1. Importing the required libraries¶
# Data cleaning and eda
import warnings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
# Data processing and data augmentation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.gridspec as gridspec
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import mutual_info_classif
# Model general
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, ConfusionMatrixDisplay, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
# SVM
from sklearn.svm import SVC
# Random forest
from sklearn.ensemble import RandomForestClassifier
# CNN
# !pip install tensorflow
import tensorflow as tf
# Shap
!pip install shap
import shap
warnings.filterwarnings("ignore")
Collecting shap
Downloading shap-0.43.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (532 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 532.9/532.9 kB 4.7 MB/s eta 0:00:00
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from shap) (1.23.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from shap) (1.11.3)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from shap) (1.2.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from shap) (1.5.3)
Requirement already satisfied: tqdm>=4.27.0 in /usr/local/lib/python3.10/dist-packages (from shap) (4.66.1)
Requirement already satisfied: packaging>20.9 in /usr/local/lib/python3.10/dist-packages (from shap) (23.2)
Collecting slicer==0.0.7 (from shap)
Downloading slicer-0.0.7-py3-none-any.whl (14 kB)
Requirement already satisfied: numba in /usr/local/lib/python3.10/dist-packages (from shap) (0.56.4)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.10/dist-packages (from shap) (2.2.1)
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in /usr/local/lib/python3.10/dist-packages (from numba->shap) (0.39.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from numba->shap) (67.7.2)
Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->shap) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->shap) (2023.3.post1)
Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->shap) (1.3.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->shap) (3.2.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->shap) (1.16.0)
Installing collected packages: slicer, shap
Successfully installed shap-0.43.0 slicer-0.0.7
2. Loading Dataset¶
from google.colab import drive
drive.mount('/content/drive')
dataset = pd.read_csv('/content/drive/My Drive/BT4012 Fraud Analytics/Code/transaction_dataset.csv')
dataset.head()
Mounted at /content/drive
| Unnamed: 0 | Index | Address | FLAG | Avg min between sent tnx | Avg min between received tnx | Time Diff between first and last (Mins) | Sent tnx | Received Tnx | Number of Created Contracts | ... | ERC20 min val sent | ERC20 max val sent | ERC20 avg val sent | ERC20 min val sent contract | ERC20 max val sent contract | ERC20 avg val sent contract | ERC20 uniq sent token name | ERC20 uniq rec token name | ERC20 most sent token type | ERC20_most_rec_token_type | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 0x00009277775ac7d0d59eaad8fee3d10ac6c805e8 | 0 | 844.26 | 1093.71 | 704785.63 | 721 | 89 | 0 | ... | 0.000000 | 1.683100e+07 | 271779.920000 | 0.0 | 0.0 | 0.0 | 39.0 | 57.0 | Cofoundit | Numeraire |
| 1 | 1 | 2 | 0x0002b44ddb1476db43c868bd494422ee4c136fed | 0 | 12709.07 | 2958.44 | 1218216.73 | 94 | 8 | 0 | ... | 2.260809 | 2.260809e+00 | 2.260809 | 0.0 | 0.0 | 0.0 | 1.0 | 7.0 | Livepeer Token | Livepeer Token |
| 2 | 2 | 3 | 0x0002bda54cb772d040f779e88eb453cac0daa244 | 0 | 246194.54 | 2434.02 | 516729.30 | 2 | 10 | 0 | ... | 0.000000 | 0.000000e+00 | 0.000000 | 0.0 | 0.0 | 0.0 | 0.0 | 8.0 | None | XENON |
| 3 | 3 | 4 | 0x00038e6ba2fd5c09aedb96697c8d7b8fa6632e5e | 0 | 10219.60 | 15785.09 | 397555.90 | 25 | 9 | 0 | ... | 100.000000 | 9.029231e+03 | 3804.076893 | 0.0 | 0.0 | 0.0 | 1.0 | 11.0 | Raiden | XENON |
| 4 | 4 | 5 | 0x00062d1dd1afb6fb02540ddad9cdebfe568e0d89 | 0 | 36.61 | 10707.77 | 382472.42 | 4598 | 20 | 1 | ... | 0.000000 | 4.500000e+04 | 13726.659220 | 0.0 | 0.0 | 0.0 | 6.0 | 27.0 | StatusNetwork | EOS |
5 rows × 51 columns
print("The shape of the dataset is: " + str(dataset.shape))
The shape of the dataset is: (9841, 51)
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 51 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Unnamed: 0 9841 non-null int64 1 Index 9841 non-null int64 2 Address 9841 non-null object 3 FLAG 9841 non-null int64 4 Avg min between sent tnx 9841 non-null float64 5 Avg min between received tnx 9841 non-null float64 6 Time Diff between first and last (Mins) 9841 non-null float64 7 Sent tnx 9841 non-null int64 8 Received Tnx 9841 non-null int64 9 Number of Created Contracts 9841 non-null int64 10 Unique Received From Addresses 9841 non-null int64 11 Unique Sent To Addresses 9841 non-null int64 12 min value received 9841 non-null float64 13 max value received 9841 non-null float64 14 avg val received 9841 non-null float64 15 min val sent 9841 non-null float64 16 max val sent 9841 non-null float64 17 avg val sent 9841 non-null float64 18 min value sent to contract 9841 non-null float64 19 max val sent to contract 9841 non-null float64 20 avg value sent to contract 9841 non-null float64 21 total transactions (including tnx to create contract 9841 non-null int64 22 total Ether sent 9841 non-null float64 23 total ether received 9841 non-null float64 24 total ether sent contracts 9841 non-null float64 25 total ether balance 9841 non-null float64 26 Total ERC20 tnxs 9012 non-null float64 27 ERC20 total Ether received 9012 non-null float64 28 ERC20 total ether sent 9012 non-null float64 29 ERC20 total Ether sent contract 9012 non-null float64 30 ERC20 uniq sent addr 9012 non-null float64 31 ERC20 uniq rec addr 9012 non-null float64 32 ERC20 uniq sent addr.1 9012 non-null float64 33 ERC20 uniq rec contract addr 9012 non-null float64 34 ERC20 avg time between sent tnx 9012 non-null float64 35 ERC20 avg time between rec tnx 9012 non-null float64 36 ERC20 avg time between rec 2 tnx 9012 non-null float64 37 ERC20 avg time between contract tnx 9012 non-null float64 38 ERC20 min val rec 9012 non-null float64 39 ERC20 max val rec 9012 non-null float64 40 ERC20 avg val rec 9012 non-null float64 41 ERC20 min val sent 9012 non-null float64 42 ERC20 max val sent 9012 non-null float64 43 ERC20 avg val sent 9012 non-null float64 44 ERC20 min val sent contract 9012 non-null float64 45 ERC20 max val sent contract 9012 non-null float64 46 ERC20 avg val sent contract 9012 non-null float64 47 ERC20 uniq sent token name 9012 non-null float64 48 ERC20 uniq rec token name 9012 non-null float64 49 ERC20 most sent token type 9000 non-null object 50 ERC20_most_rec_token_type 8990 non-null object dtypes: float64(39), int64(9), object(3) memory usage: 3.8+ MB
duplicate_rows = dataset[dataset.duplicated()]
num_duplicated_rows = duplicate_rows.shape[0]
print(f"The number of duplicate rows in this dataset is: {num_duplicated_rows}")
The number of duplicate rows in this dataset is: 0
# Gives total count of fraudulent transactions
fraudulent_count = dataset['FLAG'].value_counts()[1]
# Gives total count of non-fraudulent transactions
non_fraudulent_count = dataset['FLAG'].value_counts()[0]
# Required ratio
ratio = fraudulent_count/non_fraudulent_count
# Answer
print("The ratio of fraud vs non-fraud transactions is " + str(ratio) +'.')
The ratio of fraud vs non-fraud transactions is 0.2843904985643435.
3. Data Cleaning¶
Irrelevant and Errorneous Columns¶
# Drop index-related columns that are not useful
dataset.drop(columns=['Unnamed: 0', "Index", 'Address'], axis=1, inplace=True)
# Look at dataset again
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 48 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 FLAG 9841 non-null int64 1 Avg min between sent tnx 9841 non-null float64 2 Avg min between received tnx 9841 non-null float64 3 Time Diff between first and last (Mins) 9841 non-null float64 4 Sent tnx 9841 non-null int64 5 Received Tnx 9841 non-null int64 6 Number of Created Contracts 9841 non-null int64 7 Unique Received From Addresses 9841 non-null int64 8 Unique Sent To Addresses 9841 non-null int64 9 min value received 9841 non-null float64 10 max value received 9841 non-null float64 11 avg val received 9841 non-null float64 12 min val sent 9841 non-null float64 13 max val sent 9841 non-null float64 14 avg val sent 9841 non-null float64 15 min value sent to contract 9841 non-null float64 16 max val sent to contract 9841 non-null float64 17 avg value sent to contract 9841 non-null float64 18 total transactions (including tnx to create contract 9841 non-null int64 19 total Ether sent 9841 non-null float64 20 total ether received 9841 non-null float64 21 total ether sent contracts 9841 non-null float64 22 total ether balance 9841 non-null float64 23 Total ERC20 tnxs 9012 non-null float64 24 ERC20 total Ether received 9012 non-null float64 25 ERC20 total ether sent 9012 non-null float64 26 ERC20 total Ether sent contract 9012 non-null float64 27 ERC20 uniq sent addr 9012 non-null float64 28 ERC20 uniq rec addr 9012 non-null float64 29 ERC20 uniq sent addr.1 9012 non-null float64 30 ERC20 uniq rec contract addr 9012 non-null float64 31 ERC20 avg time between sent tnx 9012 non-null float64 32 ERC20 avg time between rec tnx 9012 non-null float64 33 ERC20 avg time between rec 2 tnx 9012 non-null float64 34 ERC20 avg time between contract tnx 9012 non-null float64 35 ERC20 min val rec 9012 non-null float64 36 ERC20 max val rec 9012 non-null float64 37 ERC20 avg val rec 9012 non-null float64 38 ERC20 min val sent 9012 non-null float64 39 ERC20 max val sent 9012 non-null float64 40 ERC20 avg val sent 9012 non-null float64 41 ERC20 min val sent contract 9012 non-null float64 42 ERC20 max val sent contract 9012 non-null float64 43 ERC20 avg val sent contract 9012 non-null float64 44 ERC20 uniq sent token name 9012 non-null float64 45 ERC20 uniq rec token name 9012 non-null float64 46 ERC20 most sent token type 9000 non-null object 47 ERC20_most_rec_token_type 8990 non-null object dtypes: float64(39), int64(7), object(2) memory usage: 3.6+ MB
After inspecting the dataset using dataset.info(), we have identified potential duplicate columns due to their similar names:
First set of potential duplicates: ' ERC20 avg time between rec tnx' and ' ERC20 avg time between rec 2 tnx.'
Second set of potential duplicates: ' ERC20 uniq sent addr' and ' ERC20 uniq sent addr.1'
# Check if first set of columns are duplicates
first_set_columns = dataset[' ERC20 avg time between rec tnx'].equals(dataset[' ERC20 avg time between rec 2 tnx'])
if first_set_columns:
print(f"The columns ' ERC20 avg time between rec tnx' and ' ERC20 avg time between rec 2 tnx' are exactly the same.")
else:
print(f"The columns ' ERC20 avg time between rec tnx' and ' ERC20 avg time between rec 2 tnx' are not the same.")
# Check if second set of columns are duplicates
second_set_columns = dataset[' ERC20 uniq sent addr'].equals(dataset[' ERC20 uniq sent addr.1'])
if second_set_columns:
print(f"The columns ' ERC20 uniq sent addr' and ' ERC20 uniq sent addr.1' are exactly the same.")
else:
print(f"The columns ' ERC20 uniq sent addr' and ' ERC20 uniq sent addr.1' are not the same.")
The columns ' ERC20 avg time between rec tnx' and ' ERC20 avg time between rec 2 tnx' are exactly the same. The columns ' ERC20 uniq sent addr' and ' ERC20 uniq sent addr.1' are not the same.
As we can see, the first set of columns are duplicates, so we will be dropping one of them.
However, the second set of columns are not duplicates. Based on the column names, there seems to be a pattern. Based on our intuition, columns 29 (ERC20 uniq sent addr.1) and 30 (ERC20 uniq rec contract addr) should follow the same pattern as columns 27 (ERC20 uniq sent addr) and 28 (ERC20 uniq rec addr)
Hence, we believe that column 29 could potentially be misnamed by the author and it should be called 'ERC20 uniq sent contract addr' instead of 'ERC20 uniq sent addr.1'
To confirm our suspicion, we will check column 29 (ERC20 uniq sent contract addr) with column 26 (ERC20 total Ether sent contract). The reason for this is because if ERC20 uniq sent contract addr is 0, it means that there is no ERC20 token transactions sent to unique contract addresses, so ERC20 total Ether sent contract should be 0 as well.
# If the column is indeed ''
filtered_df = dataset[dataset[' ERC20 uniq sent addr.1'] == 0]
# Check if all corresponding values in ' ERC20 total Ether sent contract' are 0
if (filtered_df[' ERC20 total Ether sent contract'] == 0).all():
print("All values in ' ERC20 total Ether sent contract' are 0 when ' ERC20 uniq sent addr.1' is 0.")
else:
print("Not all values in ' ERC20 total Ether sent contract' are 0 when ' ERC20 uniq sent addr.1' is 0.")
filtered_df2 = dataset[dataset[' ERC20 uniq sent addr.1'] != 0]
# Check if all corresponding values in ' ERC20 total Ether sent contract' are not 0
if (filtered_df2[' ERC20 total Ether sent contract'] != 0).all():
print("All values in ' ERC20 total Ether sent contract' are not 0 when ' ERC20 uniq sent addr.1' is not 0.")
else:
print("Not all values in ' ERC20 total Ether sent contract' are not 0 when ' ERC20 uniq sent addr.1' is not 0.")
All values in ' ERC20 total Ether sent contract' are 0 when ' ERC20 uniq sent addr.1' is 0. All values in ' ERC20 total Ether sent contract' are not 0 when ' ERC20 uniq sent addr.1' is not 0.
Hence, we can assume that column 29 was misnamed by the author and it should be called 'ERC20 uniq sent contract addr' instead of 'ERC20 uniq sent addr.1' and we will be renaming the column as such.
# Rename col 29 to ERC20 uniq sent contract addr
dataset = dataset.rename(columns={' ERC20 uniq sent addr.1':'ERC20 uniq sent contract addr'})
# Drop duplicate columns
dataset.drop(columns=[' ERC20 avg time between rec 2 tnx'], axis=1, inplace=True)
# Rename target variable as Y
dataset = dataset.rename(columns={'FLAG':'Y'})
Missing values¶
# Count number of missing values for each column, sort them in descending order
top_missing_columns = dataset.isnull().sum().sort_values(ascending=False)
top_missing_columns
ERC20_most_rec_token_type 851 ERC20 most sent token type 841 ERC20 max val rec 829 ERC20 total Ether sent contract 829 ERC20 uniq sent addr 829 ERC20 uniq rec addr 829 ERC20 uniq sent contract addr 829 ERC20 uniq rec contract addr 829 ERC20 avg time between sent tnx 829 ERC20 avg time between rec tnx 829 ERC20 avg time between contract tnx 829 ERC20 min val rec 829 ERC20 avg val rec 829 ERC20 total Ether received 829 ERC20 min val sent 829 ERC20 max val sent 829 ERC20 avg val sent 829 ERC20 min val sent contract 829 ERC20 max val sent contract 829 ERC20 avg val sent contract 829 ERC20 uniq sent token name 829 ERC20 uniq rec token name 829 ERC20 total ether sent 829 Total ERC20 tnxs 829 Avg min between sent tnx 0 avg val received 0 Avg min between received tnx 0 Time Diff between first and last (Mins) 0 Sent tnx 0 Received Tnx 0 Number of Created Contracts 0 Unique Received From Addresses 0 Unique Sent To Addresses 0 min value received 0 max value received 0 min val sent 0 total ether balance 0 max val sent 0 avg val sent 0 min value sent to contract 0 max val sent to contract 0 avg value sent to contract 0 total transactions (including tnx to create contract 0 total Ether sent 0 total ether received 0 total ether sent contracts 0 Y 0 dtype: int64
# Inspect values of ERC20 most sent token type to decide how to impute values
dataset[' ERC20 most sent token type'].value_counts()
0 4399
None 1856
1191
EOS 138
OmiseGO 137
...
Arcona Distribution Contract 1
HeroCoin 1
Cindicator 1
UnlimitedIP Token 1
eosDAC Community Owned EOS Block Producer ERC20 Tokens 1
Name: ERC20 most sent token type, Length: 305, dtype: int64
We see that the highest count of ERC20 most sent token type is a token called '0' at a count of 4399, followed by '' at a count of 1856. From research, these tokens do not exist. It is also unlikely that '0' and '' are actual token names, due to their lack of distinctiveness. We can categorize this column into observations with meaningful token names vs not meaningful names (such as '0', '' and missing values).
# Mapping function to map 'ERC 20 most sent token type'
def map_token_type(token_name):
if (token_name in ['0', "''"]):
return 0
else:
return 1
# Replace empty values with 0
dataset[' ERC20 most sent token type'].fillna(0, inplace=True)
# Map remainder of the values
dataset[' ERC20 most sent token type'] = dataset[' ERC20 most sent token type'].apply(map_token_type)
# Rename column
dataset = dataset.rename(columns={' ERC20 most sent token type': 'ERC20_most_sent_token_valid_name'})
# Nature of column ' ERC20_most_rec_token_type' is similar to previous case, apply same transformation
# Replace empty values with 0
dataset[' ERC20_most_rec_token_type'].fillna(0, inplace=True)
# Map remainder of the values
dataset[' ERC20_most_rec_token_type'] = dataset[' ERC20_most_rec_token_type'].apply(map_token_type)
# Rename column
dataset = dataset.rename(columns={' ERC20_most_rec_token_type': 'ERC20_most_rec_token_valid_name'})
dataset.isnull().sum().sort_values(ascending=False)
Total ERC20 tnxs 829 ERC20 max val rec 829 ERC20 total Ether sent contract 829 ERC20 uniq sent addr 829 ERC20 uniq rec addr 829 ERC20 uniq sent contract addr 829 ERC20 uniq rec contract addr 829 ERC20 avg time between sent tnx 829 ERC20 avg time between rec tnx 829 ERC20 avg time between contract tnx 829 ERC20 min val rec 829 ERC20 avg val rec 829 ERC20 total Ether received 829 ERC20 min val sent 829 ERC20 max val sent 829 ERC20 avg val sent 829 ERC20 min val sent contract 829 ERC20 max val sent contract 829 ERC20 avg val sent contract 829 ERC20 uniq sent token name 829 ERC20 uniq rec token name 829 ERC20 total ether sent 829 ERC20_most_sent_token_valid_name 0 Y 0 Avg min between sent tnx 0 avg val received 0 Avg min between received tnx 0 Time Diff between first and last (Mins) 0 Sent tnx 0 Received Tnx 0 Number of Created Contracts 0 Unique Received From Addresses 0 Unique Sent To Addresses 0 min value received 0 max value received 0 min val sent 0 total ether balance 0 max val sent 0 avg val sent 0 min value sent to contract 0 max val sent to contract 0 avg value sent to contract 0 total transactions (including tnx to create contract 0 total Ether sent 0 total ether received 0 total ether sent contracts 0 ERC20_most_rec_token_valid_name 0 dtype: int64
For the remainder of the columns with missing values, we see that the count of rows with missing values are consistently 829. This is a sign that they are missing not at random (MNAR) because there is a systematic pattern to the missingness that affects certain columns only, suggesting that it may be related to the variables itself
dataset_with_na_rows = dataset[dataset.isnull().any(axis = 1)]
dataset_with_na_rows[[' ERC20 max val rec', ' ERC20 uniq sent addr', ' ERC20 uniq rec addr', 'ERC20 uniq sent contract addr', ' ERC20 uniq rec contract addr',
' ERC20 avg time between sent tnx', ' ERC20 avg time between rec tnx', ' ERC20 avg time between contract tnx', ' ERC20 min val rec',
' ERC20 avg val rec', ' ERC20 total ether sent', ' ERC20 min val sent', ' ERC20 max val sent' , ' ERC20 avg val sent',
' ERC20 min val sent contract', ' ERC20 max val sent contract',' ERC20 avg val sent contract', ' ERC20 uniq sent token name', ' ERC20 uniq rec token name',
' ERC20 total Ether sent contract', ' ERC20 total Ether received',' Total ERC20 tnxs']]
| ERC20 max val rec | ERC20 uniq sent addr | ERC20 uniq rec addr | ERC20 uniq sent contract addr | ERC20 uniq rec contract addr | ERC20 avg time between sent tnx | ERC20 avg time between rec tnx | ERC20 avg time between contract tnx | ERC20 min val rec | ERC20 avg val rec | ... | ERC20 max val sent | ERC20 avg val sent | ERC20 min val sent contract | ERC20 max val sent contract | ERC20 avg val sent contract | ERC20 uniq sent token name | ERC20 uniq rec token name | ERC20 total Ether sent contract | ERC20 total Ether received | Total ERC20 tnxs | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7662 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 7666 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 7675 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 7676 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 7678 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 9831 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 9833 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 9834 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 9835 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 9839 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
829 rows × 22 columns
print("Number of rows with missing values for fraud transactions: ",dataset_with_na_rows[dataset_with_na_rows["Y"] == 1].shape[0])
Number of rows with missing values for fraud transactions: 829
# Fraud transactions with no ERC tokens are not identified, as min transaction = 1 instead of 0
min(dataset[dataset["Y"] == 1][' Total ERC20 tnxs'].dropna())
1.0
For the remainder of the columns with missing values, we see that all missing values for all columns are in the same observations, and that they are only for fraud cases. We identified that these observations do not involve ERC20 tokens and the columns with missing values should be imputed with the value 0.
to_be_imputed = [' ERC20 max val rec', ' ERC20 uniq sent addr', ' ERC20 uniq rec addr', 'ERC20 uniq sent contract addr', ' ERC20 uniq rec contract addr',
' ERC20 avg time between sent tnx', ' ERC20 avg time between rec tnx', ' ERC20 avg time between contract tnx', ' ERC20 min val rec',
' ERC20 avg val rec', ' ERC20 total ether sent', ' ERC20 min val sent', ' ERC20 max val sent' , ' ERC20 avg val sent',
' ERC20 min val sent contract', ' ERC20 max val sent contract',' ERC20 avg val sent contract', ' ERC20 uniq sent token name', ' ERC20 uniq rec token name',
' ERC20 total Ether sent contract', ' ERC20 total Ether received',' Total ERC20 tnxs']
for col in to_be_imputed:
dataset[col].fillna(0, inplace=True)
# Inspect resultant df to check that there are not more missing values
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 47 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Y 9841 non-null int64 1 Avg min between sent tnx 9841 non-null float64 2 Avg min between received tnx 9841 non-null float64 3 Time Diff between first and last (Mins) 9841 non-null float64 4 Sent tnx 9841 non-null int64 5 Received Tnx 9841 non-null int64 6 Number of Created Contracts 9841 non-null int64 7 Unique Received From Addresses 9841 non-null int64 8 Unique Sent To Addresses 9841 non-null int64 9 min value received 9841 non-null float64 10 max value received 9841 non-null float64 11 avg val received 9841 non-null float64 12 min val sent 9841 non-null float64 13 max val sent 9841 non-null float64 14 avg val sent 9841 non-null float64 15 min value sent to contract 9841 non-null float64 16 max val sent to contract 9841 non-null float64 17 avg value sent to contract 9841 non-null float64 18 total transactions (including tnx to create contract 9841 non-null int64 19 total Ether sent 9841 non-null float64 20 total ether received 9841 non-null float64 21 total ether sent contracts 9841 non-null float64 22 total ether balance 9841 non-null float64 23 Total ERC20 tnxs 9841 non-null float64 24 ERC20 total Ether received 9841 non-null float64 25 ERC20 total ether sent 9841 non-null float64 26 ERC20 total Ether sent contract 9841 non-null float64 27 ERC20 uniq sent addr 9841 non-null float64 28 ERC20 uniq rec addr 9841 non-null float64 29 ERC20 uniq sent contract addr 9841 non-null float64 30 ERC20 uniq rec contract addr 9841 non-null float64 31 ERC20 avg time between sent tnx 9841 non-null float64 32 ERC20 avg time between rec tnx 9841 non-null float64 33 ERC20 avg time between contract tnx 9841 non-null float64 34 ERC20 min val rec 9841 non-null float64 35 ERC20 max val rec 9841 non-null float64 36 ERC20 avg val rec 9841 non-null float64 37 ERC20 min val sent 9841 non-null float64 38 ERC20 max val sent 9841 non-null float64 39 ERC20 avg val sent 9841 non-null float64 40 ERC20 min val sent contract 9841 non-null float64 41 ERC20 max val sent contract 9841 non-null float64 42 ERC20 avg val sent contract 9841 non-null float64 43 ERC20 uniq sent token name 9841 non-null float64 44 ERC20 uniq rec token name 9841 non-null float64 45 ERC20_most_sent_token_valid_name 9841 non-null int64 46 ERC20_most_rec_token_valid_name 9841 non-null int64 dtypes: float64(38), int64(9) memory usage: 3.5 MB
There are no more missing values. Our resulting dataframe has 45 columns and 9841 observations.
Low Variability¶
# Check for less or zero variance
numeric_columns = dataset.select_dtypes(include=['int64', 'float64'])
# Calculate the variance for each numeric column
for column in numeric_columns.columns:
column_variance = numeric_columns[column].var()
print(f"Variance in '{column}': {column_variance}")
Variance in 'Y': 0.17241103050701273 Variance in 'Avg min between sent tnx': 461671829.791403 Variance in 'Avg min between received tnx': 532765558.15345657 Variance in 'Time Diff between first and last (Mins)': 104288903881.01013 Variance in 'Sent tnx': 573391.7611902419 Variance in 'Received Tnx': 885173.4141139443 Variance in 'Number of Created Contracts': 20006.852947970285 Variance in 'Unique Received From Addresses': 89174.56869358987 Variance in 'Unique Sent To Addresses': 69601.20869607243 Variance in 'min value received': 106229.80395792934 Variance in 'max value received ': 169229437.82202128 Variance in 'avg val received': 8323237.903808778 Variance in 'min val sent': 19212.643822774065 Variance in 'max val sent': 43946460.26083183 Variance in 'avg val sent': 57159.349305730815 Variance in 'min value sent to contract': 5.0803714012142645e-08 Variance in 'max val sent to contract': 2.6606518910266735e-07 Variance in 'avg value sent to contract': 1.0460961450311993e-07 Variance in 'total transactions (including tnx to create contract': 1828996.6146157237 Variance in 'total Ether sent': 128395165629.00435 Variance in 'total ether received': 132645116719.714 Variance in 'total ether sent contracts': 2.660625045367055e-07 Variance in 'total ether balance': 58770085659.43741 Variance in ' Total ERC20 tnxs': 183510.17473757546 Variance in ' ERC20 total Ether received': 1.017063241588219e+20 Variance in ' ERC20 total ether sent': 1.2759509660733422e+18 Variance in ' ERC20 total Ether sent contract': 34396748.246553466 Variance in ' ERC20 uniq sent addr': 10147.234678398734 Variance in ' ERC20 uniq rec addr': 6134.738618051409 Variance in 'ERC20 uniq sent contract addr': 0.0039534905815474175 Variance in ' ERC20 uniq rec contract addr': 274.2391314791361 Variance in ' ERC20 avg time between sent tnx': 0.0 Variance in ' ERC20 avg time between rec tnx': 0.0 Variance in ' ERC20 avg time between contract tnx': 0.0 Variance in ' ERC20 min val rec': 261048823.60781482 Variance in ' ERC20 max val rec': 1.016835284246567e+20 Variance in ' ERC20 avg val rec': 4.1985986381545176e+16 Variance in ' ERC20 min val sent': 1016498764325.933 Variance in ' ERC20 max val sent': 1.274901251345477e+18 Variance in ' ERC20 avg val sent': 3.2037375380588045e+17 Variance in ' ERC20 min val sent contract': 0.0 Variance in ' ERC20 max val sent contract': 0.0 Variance in ' ERC20 avg val sent contract': 0.0 Variance in ' ERC20 uniq sent token name': 41.688188952309204 Variance in ' ERC20 uniq rec token name': 256.537539871766 Variance in 'ERC20_most_sent_token_valid_name': 0.24722766788688105 Variance in 'ERC20_most_rec_token_valid_name': 0.24721690736366767
From above, we see that the following variables have zero variance
- ' ERC20 avg time between sent tnx'
- ' ERC20 avg time between rec tnx'
- ' ERC20 avg time between contract tnx'
- ' ERC20 min val sent contract'
- ' ERC20 max val sent contract'
- ' ERC20 avg val sent contract'
And the following variables have extremely low variance, close to zero:
- 'min value sent to contract'
- 'max val sent to contract'
- 'avg value sent to contract'
- 'total ether sent contracts'
- 'ERC20 uniq sent contract addr'
We can see that these variables are related to smart contracts.
# Drop columns with zero or extremely low variance
dataset.drop(columns=[' ERC20 avg time between sent tnx', ' ERC20 avg time between rec tnx', ' ERC20 avg time between contract tnx',
' ERC20 min val sent contract', ' ERC20 max val sent contract', ' ERC20 avg val sent contract', 'min value sent to contract', 'max val sent to contract',
'avg value sent to contract', 'total ether sent contracts', 'ERC20 uniq sent contract addr'], axis=1, inplace=True)
Outliers¶
We did consider dropping outliers on the numerical columns, based on 1.5IQR. However, many of these numerical columns contain 0 or values that are extraordinarily high. For example, for the ERC20 avg val sent column, there are a huge majority of rows whose values are 0 as seen below, but there are also values that are very huge. If we were to apply the IQR method, we will be dropping almost all rows, essentially removing that entire column.
The '0's in this case do hold a significance, showing that no ERC20 was sent from the account. Huge outliers (i.e. huge values) might also be an indicator of suspicious behaviour, so we decided not to remove outliers in this case.
dataset[' ERC20 avg val sent'].value_counts()
0.000000 8373
100.000000 121
0.000001 11
10000.000000 5
1.000000 4
...
0.938440 1
0.452260 1
586.270104 1
112.555556 1
18.571500 1
Name: ERC20 avg val sent, Length: 1309, dtype: int64
4. EDA¶
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 36 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Y 9841 non-null int64 1 Avg min between sent tnx 9841 non-null float64 2 Avg min between received tnx 9841 non-null float64 3 Time Diff between first and last (Mins) 9841 non-null float64 4 Sent tnx 9841 non-null int64 5 Received Tnx 9841 non-null int64 6 Number of Created Contracts 9841 non-null int64 7 Unique Received From Addresses 9841 non-null int64 8 Unique Sent To Addresses 9841 non-null int64 9 min value received 9841 non-null float64 10 max value received 9841 non-null float64 11 avg val received 9841 non-null float64 12 min val sent 9841 non-null float64 13 max val sent 9841 non-null float64 14 avg val sent 9841 non-null float64 15 total transactions (including tnx to create contract 9841 non-null int64 16 total Ether sent 9841 non-null float64 17 total ether received 9841 non-null float64 18 total ether balance 9841 non-null float64 19 Total ERC20 tnxs 9841 non-null float64 20 ERC20 total Ether received 9841 non-null float64 21 ERC20 total ether sent 9841 non-null float64 22 ERC20 total Ether sent contract 9841 non-null float64 23 ERC20 uniq sent addr 9841 non-null float64 24 ERC20 uniq rec addr 9841 non-null float64 25 ERC20 uniq rec contract addr 9841 non-null float64 26 ERC20 min val rec 9841 non-null float64 27 ERC20 max val rec 9841 non-null float64 28 ERC20 avg val rec 9841 non-null float64 29 ERC20 min val sent 9841 non-null float64 30 ERC20 max val sent 9841 non-null float64 31 ERC20 avg val sent 9841 non-null float64 32 ERC20 uniq sent token name 9841 non-null float64 33 ERC20 uniq rec token name 9841 non-null float64 34 ERC20_most_sent_token_valid_name 9841 non-null int64 35 ERC20_most_rec_token_valid_name 9841 non-null int64 dtypes: float64(27), int64(9) memory usage: 2.7 MB
# Descriptive statistics for all numerical variables
numerical_variables = dataset.select_dtypes(include=['int64', 'float64'])
numerical_variables = numerical_variables.columns
descriptive_stats = dataset[numerical_variables].describe().loc[['mean', 'std', 'min','50%','max']].transpose()
descriptive_stats.columns = ['mean', 'std','min', 'median','max']
descriptive_stats['variance'] = descriptive_stats['std']**2
descriptive_stats['mode'] = dataset[numerical_variables].mode().transpose()[0]
descriptive_stats['mad'] = [stats.median_abs_deviation(dataset[x]) for x in numerical_variables]
descriptive_stats['kurtosis'] = [stats.kurtosis(dataset[x]) for x in numerical_variables]
descriptive_stats['skewness'] = [stats.skew(dataset[x]) for x in numerical_variables]
descriptive_stats
| mean | std | min | median | max | variance | mode | mad | kurtosis | skewness | |
|---|---|---|---|---|---|---|---|---|---|---|
| Y | 2.214206e-01 | 4.152241e-01 | 0.00 | 0.000000 | 1.000000e+00 | 1.724110e-01 | 0.0 | 0.000000 | -0.199318 | 1.341895 |
| Avg min between sent tnx | 5.086879e+03 | 2.148655e+04 | 0.00 | 17.340000 | 4.302877e+05 | 4.616718e+08 | 0.0 | 17.340000 | 95.032225 | 8.418716 |
| Avg min between received tnx | 8.004851e+03 | 2.308171e+04 | 0.00 | 509.770000 | 4.821755e+05 | 5.327656e+08 | 0.0 | 509.770000 | 67.971030 | 6.744270 |
| Time Diff between first and last (Mins) | 2.183333e+05 | 3.229379e+05 | 0.00 | 46637.030000 | 1.954861e+06 | 1.042889e+11 | 0.0 | 46637.030000 | 2.953471 | 1.809701 |
| Sent tnx | 1.159317e+02 | 7.572264e+02 | 0.00 | 3.000000 | 1.000000e+04 | 5.733918e+05 | 0.0 | 3.000000 | 120.634073 | 10.482946 |
| Received Tnx | 1.637009e+02 | 9.408366e+02 | 0.00 | 4.000000 | 1.000000e+04 | 8.851734e+05 | 1.0 | 3.000000 | 82.576773 | 8.820039 |
| Number of Created Contracts | 3.729702e+00 | 1.414456e+02 | 0.00 | 0.000000 | 9.995000e+03 | 2.000685e+04 | 0.0 | 0.000000 | 3112.610930 | 51.712336 |
| Unique Received From Addresses | 3.036094e+01 | 2.986211e+02 | 0.00 | 2.000000 | 9.999000e+03 | 8.917457e+04 | 1.0 | 1.000000 | 418.567105 | 18.113345 |
| Unique Sent To Addresses | 2.584016e+01 | 2.638204e+02 | 0.00 | 2.000000 | 9.287000e+03 | 6.960121e+04 | 1.0 | 1.000000 | 410.105291 | 18.351527 |
| min value received | 4.384515e+01 | 3.259291e+02 | 0.00 | 0.095856 | 1.000000e+04 | 1.062298e+05 | 0.0 | 0.095856 | 657.800891 | 23.292332 |
| max value received | 5.231525e+02 | 1.300882e+04 | 0.00 | 6.000000 | 8.000000e+05 | 1.692294e+08 | 101.0 | 5.978713 | 2386.259591 | 46.416606 |
| avg val received | 1.007117e+02 | 2.885002e+03 | 0.00 | 1.729730 | 2.836188e+05 | 8.323238e+06 | 101.0 | 1.713606 | 9476.688098 | 96.498968 |
| min val sent | 4.800090e+00 | 1.386097e+02 | 0.00 | 0.049126 | 1.200000e+04 | 1.921264e+04 | 0.0 | 0.049126 | 5950.739030 | 73.415556 |
| max val sent | 3.146173e+02 | 6.629213e+03 | 0.00 | 4.999380 | 5.200000e+05 | 4.394646e+07 | 0.0 | 4.999380 | 4190.770883 | 59.824593 |
| avg val sent | 4.475573e+01 | 2.390802e+02 | 0.00 | 1.606000 | 1.200000e+04 | 5.715935e+04 | 0.0 | 1.606000 | 1036.761326 | 25.527195 |
| total transactions (including tnx to create contract | 2.833624e+02 | 1.352404e+03 | 0.00 | 8.000000 | 1.999500e+04 | 1.828997e+06 | 4.0 | 6.000000 | 50.006440 | 6.848002 |
| total Ether sent | 1.016092e+04 | 3.583227e+05 | 0.00 | 12.486800 | 2.858096e+07 | 1.283952e+11 | 0.0 | 12.486800 | 4511.035091 | 62.353731 |
| total ether received | 1.163832e+04 | 3.642048e+05 | 0.00 | 30.529634 | 2.858159e+07 | 1.326451e+11 | 101.0 | 30.518634 | 4151.038376 | 58.786322 |
| total ether balance | 1.477395e+03 | 2.424254e+05 | -15605352.04 | 0.001722 | 1.428864e+07 | 5.877009e+10 | 0.0 | 0.002324 | 3134.128893 | -1.205078 |
| Total ERC20 tnxs | 3.320150e+01 | 4.283809e+02 | 0.00 | 0.000000 | 1.000100e+04 | 1.835102e+05 | 0.0 | 0.000000 | 457.591042 | 20.827358 |
| ERC20 total Ether received | 1.187015e+08 | 1.008496e+10 | 0.00 | 0.000000 | 1.000020e+12 | 1.017063e+20 | 0.0 | 0.000000 | 9818.588541 | 99.056107 |
| ERC20 total ether sent | 1.270022e+07 | 1.129580e+09 | 0.00 | 0.000000 | 1.120000e+11 | 1.275951e+18 | 0.0 | 0.000000 | 9815.753901 | 99.035567 |
| ERC20 total Ether sent contract | 1.015938e+02 | 5.864874e+03 | 0.00 | 0.000000 | 4.160000e+05 | 3.439675e+07 | 0.0 | 0.000000 | 3969.297666 | 61.938745 |
| ERC20 uniq sent addr | 5.163093e+00 | 1.007335e+02 | 0.00 | 0.000000 | 6.582000e+03 | 1.014723e+04 | 0.0 | 0.000000 | 2287.864646 | 42.468140 |
| ERC20 uniq rec addr | 6.958439e+00 | 7.832457e+01 | 0.00 | 0.000000 | 4.293000e+03 | 6.134739e+03 | 0.0 | 0.000000 | 1913.320348 | 39.251875 |
| ERC20 uniq rec contract addr | 4.488975e+00 | 1.656017e+01 | 0.00 | 0.000000 | 7.820000e+02 | 2.742391e+02 | 0.0 | 0.000000 | 592.948783 | 16.962601 |
| ERC20 min val rec | 4.447068e+02 | 1.615701e+04 | 0.00 | 0.000000 | 9.900000e+05 | 2.610488e+08 | 0.0 | 0.000000 | 3040.139062 | 52.802317 |
| ERC20 max val rec | 1.147012e+08 | 1.008383e+10 | 0.00 | 0.000000 | 1.000000e+12 | 1.016835e+20 | 0.0 | 0.000000 | 9822.363770 | 99.084581 |
| ERC20 avg val rec | 3.980082e+06 | 2.049048e+08 | 0.00 | 0.000000 | 1.724181e+10 | 4.198599e+16 | 0.0 | 0.000000 | 5787.507651 | 74.211009 |
| ERC20 min val sent | 1.075218e+04 | 1.008216e+06 | 0.00 | 0.000000 | 1.000000e+08 | 1.016499e+12 | 0.0 | 0.000000 | 9829.151022 | 99.135173 |
| ERC20 max val sent | 1.193780e+07 | 1.129115e+09 | 0.00 | 0.000000 | 1.120000e+11 | 1.274901e+18 | 0.0 | 0.000000 | 9832.192507 | 99.158044 |
| ERC20 avg val sent | 5.786132e+06 | 5.660157e+08 | 0.00 | 0.000000 | 5.614756e+10 | 3.203738e+17 | 0.0 | 0.000000 | 9834.386402 | 99.174560 |
| ERC20 uniq sent token name | 1.268265e+00 | 6.456639e+00 | 0.00 | 0.000000 | 2.130000e+02 | 4.168819e+01 | 0.0 | 0.000000 | 275.797972 | 12.841933 |
| ERC20 uniq rec token name | 4.420079e+00 | 1.601679e+01 | 0.00 | 0.000000 | 7.370000e+02 | 2.565375e+02 | 0.0 | 0.000000 | 541.546339 | 16.138336 |
| ERC20_most_sent_token_valid_name | 5.528910e-01 | 4.972199e-01 | 0.00 | 1.000000 | 1.000000e+00 | 2.472277e-01 | 1.0 | 0.000000 | -1.954734 | -0.212758 |
| ERC20_most_rec_token_valid_name | 5.529926e-01 | 4.972091e-01 | 0.00 | 1.000000 | 1.000000e+00 | 2.472169e-01 | 1.0 | 0.000000 | -1.954558 | -0.213171 |
From the descriptive statistics, we find many columns with mode 0 and MAD 0, while still having high std dev. We should also drop the columns where close to all observations are 0 and a very small number of outliers exist (<1%), as an unlucky train-val-test split may mean that the model will not capture the feature effectively. However, we will not drop the feature if the majority of non 0 are in the fraud class, as it means that it may be an indicator of fraud.
print("Col name", "Number of non-0s")
cols_with_0_mad = descriptive_stats[descriptive_stats['mad'] == 0].index
for x in cols_with_0_mad:
print(x,dataset[dataset[x] != 0].shape[0])
Col name Number of non-0s Y 2179 Number of Created Contracts 1356 Total ERC20 tnxs 4613 ERC20 total Ether received 4572 ERC20 total ether sent 1566 ERC20 total Ether sent contract 28 ERC20 uniq sent addr 1566 ERC20 uniq rec addr 4573 ERC20 uniq rec contract addr 4573 ERC20 min val rec 2275 ERC20 max val rec 4425 ERC20 avg val rec 4420 ERC20 min val sent 767 ERC20 max val sent 1477 ERC20 avg val sent 1468 ERC20 uniq sent token name 1566 ERC20 uniq rec token name 4573 ERC20_most_sent_token_valid_name 5441 ERC20_most_rec_token_valid_name 5442
dataset[dataset[' ERC20 total Ether sent contract'] > 0][["Y"]].value_counts()
Y 0 24 1 4 dtype: int64
We identify that the column 'ERC20 total Ether sent contract' should be dropped as only 28 out of 9841 (or 0.295%) entries are non 0. For the remaining columns, the number of non 0 are sizable, and thus should not be dropped.
dataset.drop(' ERC20 total Ether sent contract', axis = 1, inplace = True)
Fraud Ratio¶
# Plot graph to show the fraud distribution
pie, ax = plt.subplots(figsize=[15, 10])
labels = ['Non-fraud', 'Fraud']
plt.pie(x = dataset['Y'].value_counts(), autopct='%.2f%%', labels=labels)
plt.title('Fraud distribution')
plt.show()
Feature Correlation and Treament¶
# Select only the specified columns above
num_vs = dataset.select_dtypes(include=['int64', 'float64'])
# Drop last two binary columns
num_vs.drop(columns=['ERC20_most_sent_token_valid_name', 'ERC20_most_rec_token_valid_name'], axis=1, inplace=True)
correlation = num_vs.corr()
# Apply a color gradient, map green background and set 2 decimal places
correlation_styled = correlation.style.background_gradient(cmap='Greens').format("{:.2f}")
correlation_styled
| Y | Avg min between sent tnx | Avg min between received tnx | Time Diff between first and last (Mins) | Sent tnx | Received Tnx | Number of Created Contracts | Unique Received From Addresses | Unique Sent To Addresses | min value received | max value received | avg val received | min val sent | max val sent | avg val sent | total transactions (including tnx to create contract | total Ether sent | total ether received | total ether balance | Total ERC20 tnxs | ERC20 total Ether received | ERC20 total ether sent | ERC20 uniq sent addr | ERC20 uniq rec addr | ERC20 uniq rec contract addr | ERC20 min val rec | ERC20 max val rec | ERC20 avg val rec | ERC20 min val sent | ERC20 max val sent | ERC20 avg val sent | ERC20 uniq sent token name | ERC20 uniq rec token name | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Y | 1.00 | -0.03 | -0.12 | -0.27 | -0.08 | -0.08 | -0.01 | -0.03 | -0.05 | -0.02 | -0.02 | -0.01 | 0.01 | -0.02 | -0.06 | -0.10 | -0.01 | -0.02 | -0.00 | -0.03 | -0.01 | 0.02 | -0.03 | -0.03 | -0.06 | 0.00 | -0.01 | 0.00 | 0.02 | 0.02 | 0.02 | -0.03 | -0.06 |
| Avg min between sent tnx | -0.03 | 1.00 | 0.06 | 0.21 | -0.03 | -0.04 | -0.01 | -0.02 | -0.02 | -0.01 | -0.01 | -0.00 | -0.00 | -0.01 | 0.00 | -0.04 | -0.01 | -0.01 | -0.00 | -0.01 | -0.00 | -0.00 | -0.01 | 0.00 | 0.05 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | 0.05 |
| Avg min between received tnx | -0.12 | 0.06 | 1.00 | 0.30 | -0.04 | -0.05 | -0.01 | -0.03 | -0.03 | -0.05 | -0.01 | -0.01 | -0.01 | -0.01 | -0.04 | -0.06 | -0.01 | -0.01 | -0.00 | -0.02 | -0.00 | -0.00 | -0.01 | -0.01 | -0.01 | -0.01 | -0.00 | -0.01 | -0.00 | -0.00 | -0.00 | -0.02 | -0.01 |
| Time Diff between first and last (Mins) | -0.27 | 0.21 | 0.30 | 1.00 | 0.15 | 0.15 | -0.00 | 0.04 | 0.07 | -0.08 | -0.00 | -0.01 | -0.01 | 0.01 | -0.05 | 0.19 | 0.01 | 0.01 | 0.00 | 0.08 | 0.05 | -0.00 | 0.04 | 0.08 | 0.33 | -0.01 | 0.05 | 0.05 | -0.01 | -0.01 | -0.01 | 0.27 | 0.33 |
| Sent tnx | -0.08 | -0.03 | -0.04 | 0.15 | 1.00 | 0.20 | 0.32 | 0.13 | 0.67 | 0.02 | 0.10 | 0.14 | -0.00 | 0.23 | 0.03 | 0.73 | 0.24 | 0.16 | -0.13 | 0.38 | 0.01 | -0.00 | 0.36 | 0.30 | 0.22 | -0.00 | 0.00 | 0.01 | -0.00 | -0.00 | -0.00 | 0.08 | 0.22 |
| Received Tnx | -0.08 | -0.04 | -0.05 | 0.15 | 0.20 | 1.00 | -0.00 | 0.65 | 0.16 | -0.02 | 0.22 | -0.00 | 0.09 | 0.10 | 0.13 | 0.81 | 0.13 | 0.24 | 0.16 | 0.12 | 0.02 | -0.00 | 0.04 | 0.14 | 0.20 | -0.00 | 0.02 | 0.02 | -0.00 | -0.00 | -0.00 | 0.05 | 0.21 |
| Number of Created Contracts | -0.01 | -0.01 | -0.01 | -0.00 | 0.32 | -0.00 | 1.00 | -0.00 | 0.08 | -0.00 | -0.00 | -0.00 | -0.00 | 0.14 | -0.00 | 0.28 | 0.02 | -0.00 | -0.04 | 0.25 | 0.00 | 0.00 | 0.15 | 0.19 | 0.03 | -0.00 | 0.00 | 0.00 | -0.00 | 0.00 | -0.00 | 0.01 | 0.03 |
| Unique Received From Addresses | -0.03 | -0.02 | -0.03 | 0.04 | 0.13 | 0.65 | -0.00 | 1.00 | 0.16 | -0.01 | 0.18 | -0.00 | 0.30 | 0.06 | 0.23 | 0.52 | 0.03 | 0.12 | 0.14 | 0.06 | 0.00 | 0.00 | 0.05 | 0.08 | 0.15 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.04 | 0.15 |
| Unique Sent To Addresses | -0.05 | -0.02 | -0.03 | 0.07 | 0.67 | 0.16 | 0.08 | 0.16 | 1.00 | 0.07 | 0.15 | 0.21 | -0.00 | 0.20 | 0.02 | 0.50 | 0.16 | 0.09 | -0.11 | 0.15 | 0.01 | 0.00 | 0.12 | 0.18 | 0.24 | -0.00 | 0.00 | 0.01 | -0.00 | -0.00 | -0.00 | 0.09 | 0.24 |
| min value received | -0.02 | -0.01 | -0.05 | -0.08 | 0.02 | -0.02 | -0.00 | -0.01 | 0.07 | 1.00 | 0.03 | 0.12 | 0.12 | 0.02 | 0.27 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.03 | 0.00 |
| max value received | -0.02 | -0.01 | -0.01 | -0.00 | 0.10 | 0.22 | -0.00 | 0.18 | 0.15 | 0.03 | 1.00 | 0.62 | 0.00 | 0.14 | 0.04 | 0.21 | 0.11 | 0.30 | 0.28 | 0.03 | 0.01 | -0.00 | 0.00 | 0.04 | 0.18 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.01 | 0.18 |
| avg val received | -0.01 | -0.00 | -0.01 | -0.01 | 0.14 | -0.00 | -0.00 | -0.00 | 0.21 | 0.12 | 0.62 | 1.00 | 0.01 | 0.13 | 0.07 | 0.08 | 0.16 | 0.06 | -0.14 | 0.01 | 0.00 | -0.00 | 0.00 | 0.04 | 0.20 | -0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.01 | 0.20 |
| min val sent | 0.01 | -0.00 | -0.01 | -0.01 | -0.00 | 0.09 | -0.00 | 0.30 | -0.00 | 0.12 | 0.00 | 0.01 | 1.00 | 0.02 | 0.59 | 0.06 | 0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 |
| max val sent | -0.02 | -0.01 | -0.01 | 0.01 | 0.23 | 0.10 | 0.14 | 0.06 | 0.20 | 0.02 | 0.14 | 0.13 | 0.02 | 1.00 | 0.18 | 0.21 | 0.47 | 0.09 | -0.56 | 0.13 | 0.03 | 0.00 | 0.05 | 0.20 | 0.17 | -0.00 | 0.03 | 0.03 | -0.00 | -0.00 | -0.00 | 0.02 | 0.17 |
| avg val sent | -0.06 | 0.00 | -0.04 | -0.05 | 0.03 | 0.13 | -0.00 | 0.23 | 0.02 | 0.27 | 0.04 | 0.07 | 0.59 | 0.18 | 1.00 | 0.10 | 0.20 | 0.17 | -0.05 | 0.02 | 0.02 | -0.00 | -0.01 | 0.01 | 0.05 | -0.00 | 0.02 | 0.01 | -0.00 | -0.00 | -0.00 | -0.02 | 0.05 |
| total transactions (including tnx to create contract | -0.10 | -0.04 | -0.06 | 0.19 | 0.73 | 0.81 | 0.28 | 0.52 | 0.50 | -0.00 | 0.21 | 0.08 | 0.06 | 0.21 | 0.10 | 1.00 | 0.23 | 0.25 | 0.03 | 0.32 | 0.02 | -0.00 | 0.25 | 0.28 | 0.27 | -0.00 | 0.01 | 0.02 | -0.00 | -0.00 | -0.00 | 0.08 | 0.27 |
| total Ether sent | -0.01 | -0.01 | -0.01 | 0.01 | 0.24 | 0.13 | 0.02 | 0.03 | 0.16 | -0.00 | 0.11 | 0.16 | 0.00 | 0.47 | 0.20 | 0.23 | 1.00 | 0.77 | -0.31 | 0.07 | 0.00 | 0.00 | 0.01 | 0.04 | 0.09 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.01 | 0.09 |
| total ether received | -0.02 | -0.01 | -0.01 | 0.01 | 0.16 | 0.24 | -0.00 | 0.12 | 0.09 | -0.00 | 0.30 | 0.06 | 0.00 | 0.09 | 0.17 | 0.25 | 0.77 | 1.00 | 0.36 | 0.07 | 0.00 | -0.00 | 0.00 | 0.03 | 0.07 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.00 | 0.07 |
| total ether balance | -0.00 | -0.00 | -0.00 | 0.00 | -0.13 | 0.16 | -0.04 | 0.14 | -0.11 | -0.00 | 0.28 | -0.14 | 0.00 | -0.56 | -0.05 | 0.03 | -0.31 | 0.36 | 1.00 | -0.01 | -0.00 | -0.00 | -0.01 | -0.02 | -0.02 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.02 |
| Total ERC20 tnxs | -0.03 | -0.01 | -0.02 | 0.08 | 0.38 | 0.12 | 0.25 | 0.06 | 0.15 | -0.01 | 0.03 | 0.01 | -0.00 | 0.13 | 0.02 | 0.32 | 0.07 | 0.07 | -0.01 | 1.00 | 0.00 | 0.00 | 0.73 | 0.72 | 0.27 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.19 | 0.26 |
| ERC20 total Ether received | -0.01 | -0.00 | -0.00 | 0.05 | 0.01 | 0.02 | 0.00 | 0.00 | 0.01 | -0.00 | 0.01 | 0.00 | -0.00 | 0.03 | 0.02 | 0.02 | 0.00 | 0.00 | -0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.01 | 0.03 | -0.00 | 1.00 | 0.86 | -0.00 | -0.00 | -0.00 | 0.02 | 0.03 |
| ERC20 total ether sent | 0.02 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | -0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 |
| ERC20 uniq sent addr | -0.03 | -0.01 | -0.01 | 0.04 | 0.36 | 0.04 | 0.15 | 0.05 | 0.12 | -0.01 | 0.00 | 0.00 | -0.00 | 0.05 | -0.01 | 0.25 | 0.01 | 0.00 | -0.01 | 0.73 | 0.00 | 0.00 | 1.00 | 0.57 | 0.15 | -0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.11 | 0.14 |
| ERC20 uniq rec addr | -0.03 | 0.00 | -0.01 | 0.08 | 0.30 | 0.14 | 0.19 | 0.08 | 0.18 | -0.00 | 0.04 | 0.04 | -0.00 | 0.20 | 0.01 | 0.28 | 0.04 | 0.03 | -0.02 | 0.72 | 0.01 | 0.00 | 0.57 | 1.00 | 0.44 | -0.00 | 0.01 | 0.00 | -0.00 | -0.00 | -0.00 | 0.32 | 0.44 |
| ERC20 uniq rec contract addr | -0.06 | 0.05 | -0.01 | 0.33 | 0.22 | 0.20 | 0.03 | 0.15 | 0.24 | -0.00 | 0.18 | 0.20 | -0.00 | 0.17 | 0.05 | 0.27 | 0.09 | 0.07 | -0.02 | 0.27 | 0.03 | 0.00 | 0.15 | 0.44 | 1.00 | -0.01 | 0.03 | 0.02 | -0.00 | -0.00 | -0.00 | 0.79 | 1.00 |
| ERC20 min val rec | 0.00 | 0.00 | -0.01 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | 1.00 | -0.00 | -0.00 | 0.01 | -0.00 | -0.00 | -0.00 | -0.01 |
| ERC20 max val rec | -0.01 | -0.00 | -0.00 | 0.05 | 0.00 | 0.02 | 0.00 | 0.00 | 0.00 | -0.00 | 0.00 | 0.00 | -0.00 | 0.03 | 0.02 | 0.01 | 0.00 | 0.00 | -0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.01 | 0.03 | -0.00 | 1.00 | 0.86 | -0.00 | -0.00 | -0.00 | 0.02 | 0.03 |
| ERC20 avg val rec | 0.00 | -0.00 | -0.01 | 0.05 | 0.01 | 0.02 | 0.00 | 0.00 | 0.01 | -0.00 | 0.00 | -0.00 | -0.00 | 0.03 | 0.01 | 0.02 | 0.00 | 0.00 | -0.00 | 0.00 | 0.86 | 0.00 | 0.00 | 0.00 | 0.02 | -0.00 | 0.86 | 1.00 | -0.00 | -0.00 | -0.00 | 0.01 | 0.02 |
| ERC20 min val sent | 0.02 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | -0.00 | -0.00 | -0.00 | 0.01 | -0.00 | -0.00 | 1.00 | 1.00 | 1.00 | -0.00 | -0.00 |
| ERC20 max val sent | 0.02 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | 1.00 | 1.00 | 0.00 | -0.00 |
| ERC20 avg val sent | 0.02 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | 1.00 | 1.00 | -0.00 | -0.00 |
| ERC20 uniq sent token name | -0.03 | 0.00 | -0.02 | 0.27 | 0.08 | 0.05 | 0.01 | 0.04 | 0.09 | -0.03 | 0.01 | 0.01 | -0.01 | 0.02 | -0.02 | 0.08 | 0.01 | 0.00 | -0.01 | 0.19 | 0.02 | 0.00 | 0.11 | 0.32 | 0.79 | -0.00 | 0.02 | 0.01 | -0.00 | 0.00 | -0.00 | 1.00 | 0.79 |
| ERC20 uniq rec token name | -0.06 | 0.05 | -0.01 | 0.33 | 0.22 | 0.21 | 0.03 | 0.15 | 0.24 | 0.00 | 0.18 | 0.20 | -0.00 | 0.17 | 0.05 | 0.27 | 0.09 | 0.07 | -0.02 | 0.26 | 0.03 | 0.00 | 0.14 | 0.44 | 1.00 | -0.01 | 0.03 | 0.02 | -0.00 | -0.00 | -0.00 | 0.79 | 1.00 |
num_vs = dataset.select_dtypes(include=['int64', 'float64'])
numerical_dataset = dataset[list(num_vs)]
def get_redundant_pairs(df):
# Get diagonal and lower triangular pairs of correlation matrix
pairs_to_drop = set()
cols = df.columns
for i in range(0, df.shape[1]):
for j in range(0, i+1):
pairs_to_drop.add((cols[i], cols[j]))
return pairs_to_drop
def get_top_abs_correlations(df, n=5):
au_corr = df.corr().abs().unstack()
labels_to_drop = get_redundant_pairs(df)
au_corr = au_corr.drop(labels=labels_to_drop).sort_values(ascending=False)
return au_corr[0:n]
print("Top Absolute Correlations")
print(get_top_abs_correlations(numerical_dataset, 25))
Top Absolute Correlations
ERC20 total Ether received ERC20 max val rec 0.999967
ERC20 max val sent ERC20 avg val sent 0.999952
ERC20_most_sent_token_valid_name ERC20_most_rec_token_valid_name 0.999794
ERC20 min val sent ERC20 avg val sent 0.999785
ERC20 max val sent 0.999729
ERC20 total ether sent ERC20 max val sent 0.999649
ERC20 uniq rec contract addr ERC20 uniq rec token name 0.999643
ERC20 total ether sent ERC20 avg val sent 0.999566
ERC20 min val sent 0.999311
ERC20 total Ether received ERC20 avg val rec 0.859823
ERC20 max val rec ERC20 avg val rec 0.859766
Received Tnx total transactions (including tnx to create contract 0.806393
ERC20 uniq sent token name ERC20 uniq rec token name 0.789226
ERC20 uniq rec contract addr ERC20 uniq sent token name 0.787966
total Ether sent total ether received 0.774965
Sent tnx total transactions (including tnx to create contract 0.731503
Total ERC20 tnxs ERC20 uniq sent addr 0.725989
ERC20 uniq rec addr 0.717329
Sent tnx Unique Sent To Addresses 0.670014
Received Tnx Unique Received From Addresses 0.648655
max value received avg val received 0.622959
min val sent avg val sent 0.594868
ERC20 uniq sent addr ERC20 uniq rec addr 0.566484
max val sent total ether balance 0.564872
Unique Received From Addresses total transactions (including tnx to create contract 0.523848
dtype: float64
perfect_corr_cols = [' ERC20 max val sent', ' ERC20 min val sent', ' ERC20 avg val sent', ' ERC20 max val rec', ' ERC20 uniq rec contract addr']
# Re-check corr. matrix
# Drop cols w perfect corr
num_vs.drop(columns=perfect_corr_cols, axis=1, inplace=True)
correlation_new = num_vs.corr()
# Apply a color gradient, map green background and set 2 decimal places
correlation_styled = correlation_new.style.background_gradient(cmap='Greens').format("{:.2f}")
correlation_styled
| Y | Avg min between sent tnx | Avg min between received tnx | Time Diff between first and last (Mins) | Sent tnx | Received Tnx | Number of Created Contracts | Unique Received From Addresses | Unique Sent To Addresses | min value received | max value received | avg val received | min val sent | max val sent | avg val sent | total transactions (including tnx to create contract | total Ether sent | total ether received | total ether balance | Total ERC20 tnxs | ERC20 total Ether received | ERC20 total ether sent | ERC20 uniq sent addr | ERC20 uniq rec addr | ERC20 min val rec | ERC20 avg val rec | ERC20 uniq sent token name | ERC20 uniq rec token name | ERC20_most_sent_token_valid_name | ERC20_most_rec_token_valid_name | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Y | 1.00 | -0.03 | -0.12 | -0.27 | -0.08 | -0.08 | -0.01 | -0.03 | -0.05 | -0.02 | -0.02 | -0.01 | 0.01 | -0.02 | -0.06 | -0.10 | -0.01 | -0.02 | -0.00 | -0.03 | -0.01 | 0.02 | -0.03 | -0.03 | 0.00 | 0.00 | -0.03 | -0.06 | 0.48 | 0.48 |
| Avg min between sent tnx | -0.03 | 1.00 | 0.06 | 0.21 | -0.03 | -0.04 | -0.01 | -0.02 | -0.02 | -0.01 | -0.01 | -0.00 | -0.00 | -0.01 | 0.00 | -0.04 | -0.01 | -0.01 | -0.00 | -0.01 | -0.00 | -0.00 | -0.01 | 0.00 | 0.00 | -0.00 | 0.00 | 0.05 | 0.13 | 0.13 |
| Avg min between received tnx | -0.12 | 0.06 | 1.00 | 0.30 | -0.04 | -0.05 | -0.01 | -0.03 | -0.03 | -0.05 | -0.01 | -0.01 | -0.01 | -0.01 | -0.04 | -0.06 | -0.01 | -0.01 | -0.00 | -0.02 | -0.00 | -0.00 | -0.01 | -0.01 | -0.01 | -0.01 | -0.02 | -0.01 | -0.02 | -0.02 |
| Time Diff between first and last (Mins) | -0.27 | 0.21 | 0.30 | 1.00 | 0.15 | 0.15 | -0.00 | 0.04 | 0.07 | -0.08 | -0.00 | -0.01 | -0.01 | 0.01 | -0.05 | 0.19 | 0.01 | 0.01 | 0.00 | 0.08 | 0.05 | -0.00 | 0.04 | 0.08 | -0.01 | 0.05 | 0.27 | 0.33 | 0.32 | 0.32 |
| Sent tnx | -0.08 | -0.03 | -0.04 | 0.15 | 1.00 | 0.20 | 0.32 | 0.13 | 0.67 | 0.02 | 0.10 | 0.14 | -0.00 | 0.23 | 0.03 | 0.73 | 0.24 | 0.16 | -0.13 | 0.38 | 0.01 | -0.00 | 0.36 | 0.30 | -0.00 | 0.01 | 0.08 | 0.22 | 0.08 | 0.08 |
| Received Tnx | -0.08 | -0.04 | -0.05 | 0.15 | 0.20 | 1.00 | -0.00 | 0.65 | 0.16 | -0.02 | 0.22 | -0.00 | 0.09 | 0.10 | 0.13 | 0.81 | 0.13 | 0.24 | 0.16 | 0.12 | 0.02 | -0.00 | 0.04 | 0.14 | -0.00 | 0.02 | 0.05 | 0.21 | 0.11 | 0.11 |
| Number of Created Contracts | -0.01 | -0.01 | -0.01 | -0.00 | 0.32 | -0.00 | 1.00 | -0.00 | 0.08 | -0.00 | -0.00 | -0.00 | -0.00 | 0.14 | -0.00 | 0.28 | 0.02 | -0.00 | -0.04 | 0.25 | 0.00 | 0.00 | 0.15 | 0.19 | -0.00 | 0.00 | 0.01 | 0.03 | 0.02 | 0.02 |
| Unique Received From Addresses | -0.03 | -0.02 | -0.03 | 0.04 | 0.13 | 0.65 | -0.00 | 1.00 | 0.16 | -0.01 | 0.18 | -0.00 | 0.30 | 0.06 | 0.23 | 0.52 | 0.03 | 0.12 | 0.14 | 0.06 | 0.00 | 0.00 | 0.05 | 0.08 | -0.00 | 0.00 | 0.04 | 0.15 | 0.06 | 0.06 |
| Unique Sent To Addresses | -0.05 | -0.02 | -0.03 | 0.07 | 0.67 | 0.16 | 0.08 | 0.16 | 1.00 | 0.07 | 0.15 | 0.21 | -0.00 | 0.20 | 0.02 | 0.50 | 0.16 | 0.09 | -0.11 | 0.15 | 0.01 | 0.00 | 0.12 | 0.18 | -0.00 | 0.01 | 0.09 | 0.24 | 0.06 | 0.06 |
| min value received | -0.02 | -0.01 | -0.05 | -0.08 | 0.02 | -0.02 | -0.00 | -0.01 | 0.07 | 1.00 | 0.03 | 0.12 | 0.12 | 0.02 | 0.27 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.00 | -0.03 | 0.00 | -0.10 | -0.10 |
| max value received | -0.02 | -0.01 | -0.01 | -0.00 | 0.10 | 0.22 | -0.00 | 0.18 | 0.15 | 0.03 | 1.00 | 0.62 | 0.00 | 0.14 | 0.04 | 0.21 | 0.11 | 0.30 | 0.28 | 0.03 | 0.01 | -0.00 | 0.00 | 0.04 | -0.00 | 0.00 | 0.01 | 0.18 | 0.02 | 0.02 |
| avg val received | -0.01 | -0.00 | -0.01 | -0.01 | 0.14 | -0.00 | -0.00 | -0.00 | 0.21 | 0.12 | 0.62 | 1.00 | 0.01 | 0.13 | 0.07 | 0.08 | 0.16 | 0.06 | -0.14 | 0.01 | 0.00 | -0.00 | 0.00 | 0.04 | -0.00 | -0.00 | 0.01 | 0.20 | -0.01 | -0.01 |
| min val sent | 0.01 | -0.00 | -0.01 | -0.01 | -0.00 | 0.09 | -0.00 | 0.30 | -0.00 | 0.12 | 0.00 | 0.01 | 1.00 | 0.02 | 0.59 | 0.06 | 0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 | 0.01 | 0.01 |
| max val sent | -0.02 | -0.01 | -0.01 | 0.01 | 0.23 | 0.10 | 0.14 | 0.06 | 0.20 | 0.02 | 0.14 | 0.13 | 0.02 | 1.00 | 0.18 | 0.21 | 0.47 | 0.09 | -0.56 | 0.13 | 0.03 | 0.00 | 0.05 | 0.20 | -0.00 | 0.03 | 0.02 | 0.17 | -0.00 | -0.00 |
| avg val sent | -0.06 | 0.00 | -0.04 | -0.05 | 0.03 | 0.13 | -0.00 | 0.23 | 0.02 | 0.27 | 0.04 | 0.07 | 0.59 | 0.18 | 1.00 | 0.10 | 0.20 | 0.17 | -0.05 | 0.02 | 0.02 | -0.00 | -0.01 | 0.01 | -0.00 | 0.01 | -0.02 | 0.05 | -0.11 | -0.11 |
| total transactions (including tnx to create contract | -0.10 | -0.04 | -0.06 | 0.19 | 0.73 | 0.81 | 0.28 | 0.52 | 0.50 | -0.00 | 0.21 | 0.08 | 0.06 | 0.21 | 0.10 | 1.00 | 0.23 | 0.25 | 0.03 | 0.32 | 0.02 | -0.00 | 0.25 | 0.28 | -0.00 | 0.02 | 0.08 | 0.27 | 0.12 | 0.12 |
| total Ether sent | -0.01 | -0.01 | -0.01 | 0.01 | 0.24 | 0.13 | 0.02 | 0.03 | 0.16 | -0.00 | 0.11 | 0.16 | 0.00 | 0.47 | 0.20 | 0.23 | 1.00 | 0.77 | -0.31 | 0.07 | 0.00 | 0.00 | 0.01 | 0.04 | -0.00 | 0.00 | 0.01 | 0.09 | 0.01 | 0.01 |
| total ether received | -0.02 | -0.01 | -0.01 | 0.01 | 0.16 | 0.24 | -0.00 | 0.12 | 0.09 | -0.00 | 0.30 | 0.06 | 0.00 | 0.09 | 0.17 | 0.25 | 0.77 | 1.00 | 0.36 | 0.07 | 0.00 | -0.00 | 0.00 | 0.03 | -0.00 | 0.00 | 0.00 | 0.07 | 0.02 | 0.02 |
| total ether balance | -0.00 | -0.00 | -0.00 | 0.00 | -0.13 | 0.16 | -0.04 | 0.14 | -0.11 | -0.00 | 0.28 | -0.14 | 0.00 | -0.56 | -0.05 | 0.03 | -0.31 | 0.36 | 1.00 | -0.01 | -0.00 | -0.00 | -0.01 | -0.02 | -0.00 | -0.00 | -0.01 | -0.02 | 0.02 | 0.02 |
| Total ERC20 tnxs | -0.03 | -0.01 | -0.02 | 0.08 | 0.38 | 0.12 | 0.25 | 0.06 | 0.15 | -0.01 | 0.03 | 0.01 | -0.00 | 0.13 | 0.02 | 0.32 | 0.07 | 0.07 | -0.01 | 1.00 | 0.00 | 0.00 | 0.73 | 0.72 | -0.00 | 0.00 | 0.19 | 0.26 | 0.07 | 0.07 |
| ERC20 total Ether received | -0.01 | -0.00 | -0.00 | 0.05 | 0.01 | 0.02 | 0.00 | 0.00 | 0.01 | -0.00 | 0.01 | 0.00 | -0.00 | 0.03 | 0.02 | 0.02 | 0.00 | 0.00 | -0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.01 | -0.00 | 0.86 | 0.02 | 0.03 | 0.01 | 0.01 |
| ERC20 total ether sent | 0.02 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | -0.00 | 0.00 | 0.00 | 0.00 | -0.01 | 0.01 |
| ERC20 uniq sent addr | -0.03 | -0.01 | -0.01 | 0.04 | 0.36 | 0.04 | 0.15 | 0.05 | 0.12 | -0.01 | 0.00 | 0.00 | -0.00 | 0.05 | -0.01 | 0.25 | 0.01 | 0.00 | -0.01 | 0.73 | 0.00 | 0.00 | 1.00 | 0.57 | -0.00 | 0.00 | 0.11 | 0.14 | 0.05 | 0.05 |
| ERC20 uniq rec addr | -0.03 | 0.00 | -0.01 | 0.08 | 0.30 | 0.14 | 0.19 | 0.08 | 0.18 | -0.00 | 0.04 | 0.04 | -0.00 | 0.20 | 0.01 | 0.28 | 0.04 | 0.03 | -0.02 | 0.72 | 0.01 | 0.00 | 0.57 | 1.00 | -0.00 | 0.00 | 0.32 | 0.44 | 0.08 | 0.08 |
| ERC20 min val rec | 0.00 | 0.00 | -0.01 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | -0.00 | -0.00 | -0.01 | 0.02 | 0.02 |
| ERC20 avg val rec | 0.00 | -0.00 | -0.01 | 0.05 | 0.01 | 0.02 | 0.00 | 0.00 | 0.01 | -0.00 | 0.00 | -0.00 | -0.00 | 0.03 | 0.01 | 0.02 | 0.00 | 0.00 | -0.00 | 0.00 | 0.86 | 0.00 | 0.00 | 0.00 | -0.00 | 1.00 | 0.01 | 0.02 | 0.02 | 0.02 |
| ERC20 uniq sent token name | -0.03 | 0.00 | -0.02 | 0.27 | 0.08 | 0.05 | 0.01 | 0.04 | 0.09 | -0.03 | 0.01 | 0.01 | -0.01 | 0.02 | -0.02 | 0.08 | 0.01 | 0.00 | -0.01 | 0.19 | 0.02 | 0.00 | 0.11 | 0.32 | -0.00 | 0.01 | 1.00 | 0.79 | 0.18 | 0.18 |
| ERC20 uniq rec token name | -0.06 | 0.05 | -0.01 | 0.33 | 0.22 | 0.21 | 0.03 | 0.15 | 0.24 | 0.00 | 0.18 | 0.20 | -0.00 | 0.17 | 0.05 | 0.27 | 0.09 | 0.07 | -0.02 | 0.26 | 0.03 | 0.00 | 0.14 | 0.44 | -0.01 | 0.02 | 0.79 | 1.00 | 0.25 | 0.25 |
| ERC20_most_sent_token_valid_name | 0.48 | 0.13 | -0.02 | 0.32 | 0.08 | 0.11 | 0.02 | 0.06 | 0.06 | -0.10 | 0.02 | -0.01 | 0.01 | -0.00 | -0.11 | 0.12 | 0.01 | 0.02 | 0.02 | 0.07 | 0.01 | -0.01 | 0.05 | 0.08 | 0.02 | 0.02 | 0.18 | 0.25 | 1.00 | 1.00 |
| ERC20_most_rec_token_valid_name | 0.48 | 0.13 | -0.02 | 0.32 | 0.08 | 0.11 | 0.02 | 0.06 | 0.06 | -0.10 | 0.02 | -0.01 | 0.01 | -0.00 | -0.11 | 0.12 | 0.01 | 0.02 | 0.02 | 0.07 | 0.01 | 0.01 | 0.05 | 0.08 | 0.02 | 0.02 | 0.18 | 0.25 | 1.00 | 1.00 |
From above, we see some variables that are highly correlated, let's investigate those pair of variables and decide whether to drop or not:
- total transactions (including tnx to create contract, Sent Tnx: 0.73
- total transations is sum of sent, received (normal and contract). Drop total transactions.
- total transactions (including tnx to create contract, Received Tnx: 0.81
- same rationale as above. Drop total transactions.
- Unique Received From Addresses, Received Tnx: 0.65
- in this dataset where addresses are close to unique (25 duplicates only, seen from above), almost every received transaction will be from a unique address. Unique Received From Addresses gives close to similar information to total Received Tnx. Drop Unique Received From Addresses.
- Unique Sent To Addresses, Sent Tnx: 0.67
- same rationale as above, but for 'sent' case. Drop Unique Sent To Addresses.
- ERC20 uniq sent addr, Total ERC20 tnxs: 0.73
- same rationale as above, but for 'ERC20' case. Drop Total ERC20 tnxs.
- ERC20 uniq rec addr, Total ERC20 tnxs: 0.72
- same rationale as above, but for 'ERC20' case. Drop Total ERC20 tnxs.
- ERC20 avg val rec, ERC20 total Ether received: 0.86
- simple mathematical derivations of one another, information may be redundant. Drop ERC20 avg val rec.
- avg val received, max val received: 0.62
- simple mathematical derivations of one another, information may be redundant. Drop avg val received.
- total ether received, total Ether sent: 0.77
- implies that an account sends and receives token relatively evenly, no reason to remove. Keep for now.
- ERC20 uniq rec token name, ERC20 uniq sent token name: 0.79
- implies that an account sends and receives token relatively evenly, no reason to remove. Keep for now.
In summary, following columns are to be dropped:
- total transactions (including tnx to create contract
- Unique Received From Addresses
- Unique Sent To Addresses
- Total ERC20 tnxs
- ERC20 avg val rec
- avg val received
highly_corr_cols = [ 'total transactions (including tnx to create contract', 'Unique Received From Addresses', 'Unique Sent To Addresses', ' Total ERC20 tnxs', ' ERC20 avg val rec', 'avg val received']
# Re-check corr. matrix
# Drop cols w perfect corr
num_vs.drop(columns=highly_corr_cols, axis=1, inplace=True)
correlation_new1 = num_vs.corr()
# Apply a color gradient, map green background and set 2 decimal places
correlation_styled = correlation_new1.style.background_gradient(cmap='Greens').format("{:.2f}")
correlation_styled
| Y | Avg min between sent tnx | Avg min between received tnx | Time Diff between first and last (Mins) | Sent tnx | Received Tnx | Number of Created Contracts | min value received | max value received | min val sent | max val sent | avg val sent | total Ether sent | total ether received | total ether balance | ERC20 total Ether received | ERC20 total ether sent | ERC20 uniq sent addr | ERC20 uniq rec addr | ERC20 min val rec | ERC20 uniq sent token name | ERC20 uniq rec token name | ERC20_most_sent_token_valid_name | ERC20_most_rec_token_valid_name | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Y | 1.00 | -0.03 | -0.12 | -0.27 | -0.08 | -0.08 | -0.01 | -0.02 | -0.02 | 0.01 | -0.02 | -0.06 | -0.01 | -0.02 | -0.00 | -0.01 | 0.02 | -0.03 | -0.03 | 0.00 | -0.03 | -0.06 | 0.48 | 0.48 |
| Avg min between sent tnx | -0.03 | 1.00 | 0.06 | 0.21 | -0.03 | -0.04 | -0.01 | -0.01 | -0.01 | -0.00 | -0.01 | 0.00 | -0.01 | -0.01 | -0.00 | -0.00 | -0.00 | -0.01 | 0.00 | 0.00 | 0.00 | 0.05 | 0.13 | 0.13 |
| Avg min between received tnx | -0.12 | 0.06 | 1.00 | 0.30 | -0.04 | -0.05 | -0.01 | -0.05 | -0.01 | -0.01 | -0.01 | -0.04 | -0.01 | -0.01 | -0.00 | -0.00 | -0.00 | -0.01 | -0.01 | -0.01 | -0.02 | -0.01 | -0.02 | -0.02 |
| Time Diff between first and last (Mins) | -0.27 | 0.21 | 0.30 | 1.00 | 0.15 | 0.15 | -0.00 | -0.08 | -0.00 | -0.01 | 0.01 | -0.05 | 0.01 | 0.01 | 0.00 | 0.05 | -0.00 | 0.04 | 0.08 | -0.01 | 0.27 | 0.33 | 0.32 | 0.32 |
| Sent tnx | -0.08 | -0.03 | -0.04 | 0.15 | 1.00 | 0.20 | 0.32 | 0.02 | 0.10 | -0.00 | 0.23 | 0.03 | 0.24 | 0.16 | -0.13 | 0.01 | -0.00 | 0.36 | 0.30 | -0.00 | 0.08 | 0.22 | 0.08 | 0.08 |
| Received Tnx | -0.08 | -0.04 | -0.05 | 0.15 | 0.20 | 1.00 | -0.00 | -0.02 | 0.22 | 0.09 | 0.10 | 0.13 | 0.13 | 0.24 | 0.16 | 0.02 | -0.00 | 0.04 | 0.14 | -0.00 | 0.05 | 0.21 | 0.11 | 0.11 |
| Number of Created Contracts | -0.01 | -0.01 | -0.01 | -0.00 | 0.32 | -0.00 | 1.00 | -0.00 | -0.00 | -0.00 | 0.14 | -0.00 | 0.02 | -0.00 | -0.04 | 0.00 | 0.00 | 0.15 | 0.19 | -0.00 | 0.01 | 0.03 | 0.02 | 0.02 |
| min value received | -0.02 | -0.01 | -0.05 | -0.08 | 0.02 | -0.02 | -0.00 | 1.00 | 0.03 | 0.12 | 0.02 | 0.27 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 | -0.00 | -0.03 | 0.00 | -0.10 | -0.10 |
| max value received | -0.02 | -0.01 | -0.01 | -0.00 | 0.10 | 0.22 | -0.00 | 0.03 | 1.00 | 0.00 | 0.14 | 0.04 | 0.11 | 0.30 | 0.28 | 0.01 | -0.00 | 0.00 | 0.04 | -0.00 | 0.01 | 0.18 | 0.02 | 0.02 |
| min val sent | 0.01 | -0.00 | -0.01 | -0.01 | -0.00 | 0.09 | -0.00 | 0.12 | 0.00 | 1.00 | 0.02 | 0.59 | 0.00 | 0.00 | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.01 | -0.00 | 0.01 | 0.01 |
| max val sent | -0.02 | -0.01 | -0.01 | 0.01 | 0.23 | 0.10 | 0.14 | 0.02 | 0.14 | 0.02 | 1.00 | 0.18 | 0.47 | 0.09 | -0.56 | 0.03 | 0.00 | 0.05 | 0.20 | -0.00 | 0.02 | 0.17 | -0.00 | -0.00 |
| avg val sent | -0.06 | 0.00 | -0.04 | -0.05 | 0.03 | 0.13 | -0.00 | 0.27 | 0.04 | 0.59 | 0.18 | 1.00 | 0.20 | 0.17 | -0.05 | 0.02 | -0.00 | -0.01 | 0.01 | -0.00 | -0.02 | 0.05 | -0.11 | -0.11 |
| total Ether sent | -0.01 | -0.01 | -0.01 | 0.01 | 0.24 | 0.13 | 0.02 | -0.00 | 0.11 | 0.00 | 0.47 | 0.20 | 1.00 | 0.77 | -0.31 | 0.00 | 0.00 | 0.01 | 0.04 | -0.00 | 0.01 | 0.09 | 0.01 | 0.01 |
| total ether received | -0.02 | -0.01 | -0.01 | 0.01 | 0.16 | 0.24 | -0.00 | -0.00 | 0.30 | 0.00 | 0.09 | 0.17 | 0.77 | 1.00 | 0.36 | 0.00 | -0.00 | 0.00 | 0.03 | -0.00 | 0.00 | 0.07 | 0.02 | 0.02 |
| total ether balance | -0.00 | -0.00 | -0.00 | 0.00 | -0.13 | 0.16 | -0.04 | -0.00 | 0.28 | 0.00 | -0.56 | -0.05 | -0.31 | 0.36 | 1.00 | -0.00 | -0.00 | -0.01 | -0.02 | -0.00 | -0.01 | -0.02 | 0.02 | 0.02 |
| ERC20 total Ether received | -0.01 | -0.00 | -0.00 | 0.05 | 0.01 | 0.02 | 0.00 | -0.00 | 0.01 | -0.00 | 0.03 | 0.02 | 0.00 | 0.00 | -0.00 | 1.00 | 0.00 | 0.00 | 0.01 | -0.00 | 0.02 | 0.03 | 0.01 | 0.01 |
| ERC20 total ether sent | 0.02 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | 0.00 | -0.00 | -0.00 | 0.00 | 1.00 | 0.00 | 0.00 | -0.00 | 0.00 | 0.00 | -0.01 | 0.01 |
| ERC20 uniq sent addr | -0.03 | -0.01 | -0.01 | 0.04 | 0.36 | 0.04 | 0.15 | -0.01 | 0.00 | -0.00 | 0.05 | -0.01 | 0.01 | 0.00 | -0.01 | 0.00 | 0.00 | 1.00 | 0.57 | -0.00 | 0.11 | 0.14 | 0.05 | 0.05 |
| ERC20 uniq rec addr | -0.03 | 0.00 | -0.01 | 0.08 | 0.30 | 0.14 | 0.19 | -0.00 | 0.04 | -0.00 | 0.20 | 0.01 | 0.04 | 0.03 | -0.02 | 0.01 | 0.00 | 0.57 | 1.00 | -0.00 | 0.32 | 0.44 | 0.08 | 0.08 |
| ERC20 min val rec | 0.00 | 0.00 | -0.01 | -0.01 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 1.00 | -0.00 | -0.01 | 0.02 | 0.02 |
| ERC20 uniq sent token name | -0.03 | 0.00 | -0.02 | 0.27 | 0.08 | 0.05 | 0.01 | -0.03 | 0.01 | -0.01 | 0.02 | -0.02 | 0.01 | 0.00 | -0.01 | 0.02 | 0.00 | 0.11 | 0.32 | -0.00 | 1.00 | 0.79 | 0.18 | 0.18 |
| ERC20 uniq rec token name | -0.06 | 0.05 | -0.01 | 0.33 | 0.22 | 0.21 | 0.03 | 0.00 | 0.18 | -0.00 | 0.17 | 0.05 | 0.09 | 0.07 | -0.02 | 0.03 | 0.00 | 0.14 | 0.44 | -0.01 | 0.79 | 1.00 | 0.25 | 0.25 |
| ERC20_most_sent_token_valid_name | 0.48 | 0.13 | -0.02 | 0.32 | 0.08 | 0.11 | 0.02 | -0.10 | 0.02 | 0.01 | -0.00 | -0.11 | 0.01 | 0.02 | 0.02 | 0.01 | -0.01 | 0.05 | 0.08 | 0.02 | 0.18 | 0.25 | 1.00 | 1.00 |
| ERC20_most_rec_token_valid_name | 0.48 | 0.13 | -0.02 | 0.32 | 0.08 | 0.11 | 0.02 | -0.10 | 0.02 | 0.01 | -0.00 | -0.11 | 0.01 | 0.02 | 0.02 | 0.01 | 0.01 | 0.05 | 0.08 | 0.02 | 0.18 | 0.25 | 1.00 | 1.00 |
# Corr. Matrix looks ok now, apply column deletion on actual dataset
drop_columns_corr = perfect_corr_cols + highly_corr_cols
dataset.drop(columns=drop_columns_corr, axis=1, inplace=True)
Distribution of Features Partitioned by Target Variable¶
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Y 9841 non-null int64 1 Avg min between sent tnx 9841 non-null float64 2 Avg min between received tnx 9841 non-null float64 3 Time Diff between first and last (Mins) 9841 non-null float64 4 Sent tnx 9841 non-null int64 5 Received Tnx 9841 non-null int64 6 Number of Created Contracts 9841 non-null int64 7 min value received 9841 non-null float64 8 max value received 9841 non-null float64 9 min val sent 9841 non-null float64 10 max val sent 9841 non-null float64 11 avg val sent 9841 non-null float64 12 total Ether sent 9841 non-null float64 13 total ether received 9841 non-null float64 14 total ether balance 9841 non-null float64 15 ERC20 total Ether received 9841 non-null float64 16 ERC20 total ether sent 9841 non-null float64 17 ERC20 uniq sent addr 9841 non-null float64 18 ERC20 uniq rec addr 9841 non-null float64 19 ERC20 min val rec 9841 non-null float64 20 ERC20 uniq sent token name 9841 non-null float64 21 ERC20 uniq rec token name 9841 non-null float64 22 ERC20_most_sent_token_valid_name 9841 non-null int64 23 ERC20_most_rec_token_valid_name 9841 non-null int64 dtypes: float64(18), int64(6) memory usage: 1.8 MB
Next, we plot the distribution of each variable across the Y variable, and see if there are any distributions we can compare.
numeric_columns = dataset.dtypes[dataset.dtypes != 'object'].index
numeric_columns = list(numeric_columns)
numeric_columns.remove('Y')
# Determine the number of rows and columns for subplots
num_rows = (len(numeric_columns) + 3) // 4
num_cols = 4
# Creating box plots for each numeric column based on 'Response'
plt.figure(figsize=(15, 5*num_rows))
for idx, column in enumerate(numeric_columns, start=1):
plt.subplot(num_rows, num_cols, idx)
sns.boxplot(data=dataset, x='Y', y=column)
plt.title(column)
plt.tight_layout()
plt.show()
We see that the time difference between the first and last transaction shows significant variability across both fraudulent and non-fraudulent transactions. Let's compare that with variables that might correlate with it logically.
Received_Tnx: Fraudulent transactions might receive less transactions (if the user decides to transfer a big amount of ERC20 at once for example), or they might receive more transactions (if the user spreads out value among more transactions to reduce suspicion)
Sent Tnx: same argument as above.
avg val received: Fraudulent transactions might receive huge amounts / small amounts on average, similar to the argument above.
avg_val_sent: Fraudulent transactions might send huge amounts / small amounts on average, similar to the argument above.
ERC20 uniq sent addr: Fraudulent transactions might send all tokens to a single account.
ERC20 uniq rec addr: Fraudulent transactions might receive all tokens in a single account.
features_to_plot = ["Time Diff between first and last (Mins)", "avg val sent", "Sent tnx", "Received Tnx", " ERC20 uniq sent addr", " ERC20 uniq rec addr", "Y"]
# Print a title for the plot.
print("Relative Plot of Selected Features: A Data Subset")
# Create a pair plot.
sns.pairplot(dataset[features_to_plot], hue="Y")
# Display the plot.
plt.show()
Relative Plot of Selected Features: A Data Subset
We can observe that the spread of fradulent transactions for each column, over time difference between first and last transaction is heavily biased towards shorter time differences. Furthermore, These fraudulent transactions have very little unique sent and receiving addresses, low amounts of received and sent transactions, and low values of average value received and sent.
This could mean that frauders create throwaway accounts (perhaps automated) that send or receive tokens from specific addresses, and in low amounts to prevent suspicion of what they were doing. This would explain the low time diff between first and last transactions, as they probably created more throwaway accounts after they're done with one to reduce tracability.
5. Feature Engineering¶
Feature Creation¶
New Feature 1: Ratio of number of sent transactions to number of received transactions
This feature can highlight transactions with atypical sending and receiving patterns, which are often indicative of fraud. These atypical patterns can be suggestive of fraudulent activities, such as individuals or entities attempting to accumulate assets without making proportional outgoing transactions.
# Calculate the Sent-to-Received Ratio, replacing infinity with NaN
dataset['Sent_to_Received_Ratio'] = dataset['Sent tnx'] / dataset['Received Tnx']
dataset['Sent_to_Received_Ratio'].fillna(1, inplace=True)
dataset['Sent_to_Received_Ratio'].replace([np.inf, -np.inf], 1, inplace=True)
# Boxplot with outliers
sns.boxplot(data=dataset, x='Y', y='Sent_to_Received_Ratio')
<Axes: xlabel='Y', ylabel='Sent_to_Received_Ratio'>
From the boxplot, we can see that there is a wider spread of values for non-fraudulent transactions and they exhibit a more diverse range of behaviour when it comes to the ratio of sent to received transactions.
On the other hand, the ratio of sent to received transactions for fraudulent transactions are mostly concentrated around the lower end of the ratio scale, which indicates that fraudulent transactions often involve fewer received transactions compared to sent transactions.
# Boxplot without outliers
sns.boxplot(data=dataset, x='Y', y='Sent_to_Received_Ratio', showfliers = False)
<Axes: xlabel='Y', ylabel='Sent_to_Received_Ratio'>
The boxplot analysis reveals that for non-fraudulent transactions, there is significant variability in the Sent_to_Received ratio. This suggests that non-fraudulent transactions exhibit a wide range of sending and receiving behaviors, with many transactions having relatively high ratios. In contrast, fraudulent transactions tend to have lower ratios on average, indicating that they often involve sending less relative to receiving.
Moreover, while there is variability in the Sent_to_Received ratio for fraudulent transactions, a substantial proportion of them have lower ratios, meaning they tend to send less compared to what they receive. This pattern aligns with the intuition that fraudulent activities often involve receiving assets without sending a significant amount in return, such as in Ponzi schemes or phishing attacks.
New Feature 2: Ratio of Total Ether Sent to Total Ether Received
Similiar to the previous feature, we created this feature to identify suspicious transactions based on the value of Ether sent to value of Ether reived.
# Calculate the Ether_Sent_to_Received_Ratio Ratio, same logic as sent-to-receive ratio above, impute those dividing by 0 by 1
dataset['Ether_Sent_to_Received_Ratio'] = dataset['total Ether sent'] / dataset['total ether received']
dataset['Ether_Sent_to_Received_Ratio'].fillna(1, inplace=True)
dataset['Ether_Sent_to_Received_Ratio'].replace([np.inf, -np.inf], 1, inplace=True)
# Box plot with outliers
sns.boxplot(data=dataset, x='Y', y='Ether_Sent_to_Received_Ratio')
<Axes: xlabel='Y', ylabel='Ether_Sent_to_Received_Ratio'>
From this boxplot, the distribution of the ratio looks pretty similiar for both fraudulent and non-fraudulent activities if we exclude the one outlier for non-fraud.
# Boxplot without outliers
sns.boxplot(data=dataset, x='Y', y='Ether_Sent_to_Received_Ratio', showfliers = False)
<Axes: xlabel='Y', ylabel='Ether_Sent_to_Received_Ratio'>
Both non-fraudulent and fraudulent transactions have ratios with values clustered around 1. This suggests that, on average, both types of transactions maintain a balance between the Ether sent and received.
However, for fraudulent transactions, the ratios are concentrated in an even narrower range, with the minimum and maximum values extremely close to 1. The IQR is also very small and extremely close to 1. This indicates a strong tendency for non-legitimate transactions to maintain near-perfect balance between Ether sent and received.
A possible reason for this could be that by maintaining a ratio close to 1, fraudsters reduce the likelihood of their transactions being flagged as suspicious or unusual by anti-fraud algorithms and financial institutions.
We have decided the drop the original columns for two main reasons:
- Data redundancy - the newly created columns already contain information about the 4 columns
- Reduced dimensionality - the number of features currently are already relateively high (close to 20) and dropping the columns will help in reducing the dimensionality of the feature variables
# Drop the original columns
dataset.drop(columns = ['total Ether sent', 'total ether received', 'Sent tnx', 'Received Tnx'], inplace=True, axis=1)
# Check that the columns are dropped correctly
dataset.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 9841 entries, 0 to 9840 Data columns (total 22 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Y 9841 non-null int64 1 Avg min between sent tnx 9841 non-null float64 2 Avg min between received tnx 9841 non-null float64 3 Time Diff between first and last (Mins) 9841 non-null float64 4 Number of Created Contracts 9841 non-null int64 5 min value received 9841 non-null float64 6 max value received 9841 non-null float64 7 min val sent 9841 non-null float64 8 max val sent 9841 non-null float64 9 avg val sent 9841 non-null float64 10 total ether balance 9841 non-null float64 11 ERC20 total Ether received 9841 non-null float64 12 ERC20 total ether sent 9841 non-null float64 13 ERC20 uniq sent addr 9841 non-null float64 14 ERC20 uniq rec addr 9841 non-null float64 15 ERC20 min val rec 9841 non-null float64 16 ERC20 uniq sent token name 9841 non-null float64 17 ERC20 uniq rec token name 9841 non-null float64 18 ERC20_most_sent_token_valid_name 9841 non-null int64 19 ERC20_most_rec_token_valid_name 9841 non-null int64 20 Sent_to_Received_Ratio 9841 non-null float64 21 Ether_Sent_to_Received_Ratio 9841 non-null float64 dtypes: float64(18), int64(4) memory usage: 1.7 MB
6. Data Pre-processing¶
Global Preprocessing: Handling Skewed Data¶
numeric_columns = dataset.dtypes[dataset.dtypes != 'object'].index
numeric_columns = list(numeric_columns)
numeric_columns.remove('Y')
# Determine the number of rows and columns for subplots
num_rows = (len(numeric_columns) + 3) // 4
num_cols = 4
# Creating box plots for each numeric column based on 'Response'
plt.figure(figsize=(20, 5*num_rows))
for idx, column in enumerate(numeric_columns, start=1):
plt.subplot(num_rows, num_cols, idx)
plt.hist(dataset[column])
plt.title(column + " skewness: " +str(round(stats.skew(dataset[column]),2)))
plt.tight_layout()
plt.show()
# Applying log transformation on the variables
numeric_columns = dataset.dtypes[dataset.dtypes != 'object'].index
numeric_columns = list(numeric_columns)
numeric_columns.remove('Y')
numeric_columns.remove('ERC20_most_sent_token_valid_name')
numeric_columns.remove('ERC20_most_rec_token_valid_name')
# Remove total ether balance because variable is already somewhat normal
numeric_columns.remove('total ether balance')
# Determine the number of rows and columns for subplots
num_rows = (len(numeric_columns) + 3) // 4
num_cols = 4
# Creating box plots for each numeric column based on 'Response'
plt.figure(figsize=(20, 5*num_rows))
for idx, column in enumerate(numeric_columns, start=1):
plt.subplot(num_rows, num_cols, idx)
log_transformed = np.log1p(dataset[column])
plt.hist(log_transformed)
plt.title("log " + column + " skewness: " +str(round(stats.skew(log_transformed),2)))
plt.tight_layout()
plt.show()
dataset_log = dataset.copy(deep = True)
numeric_columns
['Avg min between sent tnx', 'Avg min between received tnx', 'Time Diff between first and last (Mins)', 'Number of Created Contracts', 'min value received', 'max value received ', 'min val sent', 'max val sent', 'avg val sent', ' ERC20 total Ether received', ' ERC20 total ether sent', ' ERC20 uniq sent addr', ' ERC20 uniq rec addr', ' ERC20 min val rec', ' ERC20 uniq sent token name', ' ERC20 uniq rec token name', 'Sent_to_Received_Ratio', 'Ether_Sent_to_Received_Ratio']
for x in numeric_columns:
dataset_log[x] = np.log1p(dataset_log[x])
dataset_log
| Y | Avg min between sent tnx | Avg min between received tnx | Time Diff between first and last (Mins) | Number of Created Contracts | min value received | max value received | min val sent | max val sent | avg val sent | ... | ERC20 total ether sent | ERC20 uniq sent addr | ERC20 uniq rec addr | ERC20 min val rec | ERC20 uniq sent token name | ERC20 uniq rec token name | ERC20_most_sent_token_valid_name | ERC20_most_rec_token_valid_name | Sent_to_Received_Ratio | Ether_Sent_to_Received_Ratio | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 6.739644 | 6.998245 | 13.465650 | 0.000000 | 0.000000 | 3.846028 | 0.000000 | 3.472587 | 0.788767 | ... | 17.387945 | 3.433987 | 4.007333 | 0.000000 | 3.688879 | 4.060443 | 1 | 1 | 2.208398 | 0.906690 |
| 1 | 0 | 9.450150 | 7.992755 | 14.012899 | 0.000000 | 0.000000 | 1.284613 | 0.000000 | 1.029619 | 0.032316 | ... | 1.181975 | 0.693147 | 1.791759 | 0.000000 | 0.693147 | 2.079442 | 1 | 1 | 2.545531 | 0.693442 |
| 2 | 0 | 12.413881 | 7.797710 | 13.155276 | 0.000000 | 0.107166 | 0.772630 | 0.048790 | 1.512622 | 1.027584 | ... | 0.000000 | 0.000000 | 2.079442 | 0.000000 | 0.000000 | 2.197225 | 1 | 1 | 0.182322 | 0.693086 |
| 3 | 0 | 9.232161 | 9.666884 | 12.893093 | 0.000000 | 0.000000 | 6.216606 | 0.000000 | 6.111467 | 4.262706 | ... | 9.342529 | 1.098612 | 2.484907 | 0.000000 | 0.693147 | 2.484907 | 1 | 1 | 1.329136 | 1.083325 |
| 4 | 0 | 3.627270 | 9.278818 | 12.854414 | 0.693147 | 0.000000 | 2.624843 | 0.000000 | 2.302585 | 0.022434 | ... | 11.724328 | 1.609438 | 3.178054 | 0.000000 | 1.945910 | 3.332205 | 1 | 1 | 5.441985 | 1.082732 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 9836 | 1 | 9.444313 | 6.449506 | 10.981038 | 0.000000 | 0.004074 | 2.564949 | 0.382170 | 2.580217 | 2.319085 | ... | 0.000000 | 0.000000 | 1.098612 | 0.000000 | 0.000000 | 1.098612 | 1 | 1 | 0.268264 | 0.693034 |
| 9837 | 1 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.693147 | 2.665143 | 0.000000 | 0.693147 | 1 | 1 | 0.693147 | 0.693147 |
| 9838 | 1 | 7.824222 | 7.691789 | 12.474583 | 0.000000 | 0.001077 | 3.105035 | 0.003992 | 2.351375 | 0.653459 | ... | 0.000000 | 0.000000 | 1.791759 | 0.000000 | 0.000000 | 1.791759 | 1 | 1 | 0.939280 | 0.772065 |
| 9839 | 1 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.405465 | 0.405465 | 0.000000 | 0.000000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1 | 1 | 0.000000 | 0.000000 |
| 9840 | 1 | 10.525238 | 5.014362 | 13.416254 | 0.000000 | 0.585135 | 9.852142 | 0.641854 | 6.908755 | 6.469913 | ... | 0.000000 | 0.000000 | 3.637586 | 0.000000 | 0.000000 | 3.761200 | 1 | 1 | 1.945910 | 0.476555 |
9841 rows × 22 columns
y = dataset_log.Y
X = dataset_log.drop('Y', axis = 1)
Splitting of Dataset¶
train_ratio = 0.70
validation_ratio = 0.20
test_ratio = 0.10
# Taken form :https://datascience.stackexchange.com/questions/15135/train-test-validation-set-splitting-in-sklearn
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1 - train_ratio, random_state = 42, stratify = y)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio),random_state = 1,stratify = y_test)
# Check rows
print("training set shape: ", X_train.shape)
print("validation set shape: ", X_val.shape)
print("testing set shape: ", X_test.shape)
training set shape: (6888, 21) validation set shape: (1968, 21) testing set shape: (985, 21)
Local Preprocessing: Normalisation¶
# Append total ether balance back after log trf step
numeric_columns.append('total ether balance')
scaler = StandardScaler()
X_train[numeric_columns] = scaler.fit_transform(X_train[numeric_columns])
X_val[numeric_columns] = scaler.transform(X_val[numeric_columns])
X_test[numeric_columns] = scaler.transform(X_test[numeric_columns])
training = X_train.copy(deep = True)
training['Class'] = y_train
plt.figure(figsize=(12,28*4))
gs = gridspec.GridSpec(28, 1)
for i, cn in enumerate(training.columns):
ax = plt.subplot(gs[i])
sns.distplot(training[cn][training.Class == 1], bins=50)
sns.distplot(training[cn][training.Class == 0], bins=50)
ax.set_xlabel('')
ax.set_title('histogram of feature: ' + str(cn))
plt.show()
7. Data Augmentation¶
X_train.isnull().sum()
Avg min between sent tnx 0 Avg min between received tnx 0 Time Diff between first and last (Mins) 0 Number of Created Contracts 0 min value received 0 max value received 0 min val sent 0 max val sent 0 avg val sent 0 total ether balance 0 ERC20 total Ether received 0 ERC20 total ether sent 0 ERC20 uniq sent addr 0 ERC20 uniq rec addr 0 ERC20 min val rec 0 ERC20 uniq sent token name 0 ERC20 uniq rec token name 0 ERC20_most_sent_token_valid_name 0 ERC20_most_rec_token_valid_name 0 Sent_to_Received_Ratio 0 Ether_Sent_to_Received_Ratio 0 dtype: int64
sm = SMOTE(random_state = 42)
X_train_res, y_train_res = sm.fit_resample(X_train, y_train)
print("rows of fraud transaction in training set:" , y_train_res[y_train_res == 1].shape)
print("rows of non-fraud transaction in training set:" , y_train_res[y_train_res == 0].shape)
rows of fraud transaction in training set: (5363,) rows of non-fraud transaction in training set: (5363,)
Plots to find any potential pattern
features_to_plot = ["max value received ", "min value received", "Y", "max val sent", "min val sent"]
# Print a title for the plot.
print("Relative Plot of Selected Features: A Data Subset")
# Create a pair plot.
sns.pairplot(dataset[features_to_plot], hue="Y")
# Display the plot.
plt.show()
Relative Plot of Selected Features: A Data Subset
8. Model Architecture¶
Classical Models¶
# Function to evaluate the models, print confusion matrics if evaluated on test set
def evaluate(model, model_name, evaluated_set, test_features, test_labels):
y_pred = model.predict(test_features)
accuracy = accuracy_score(test_labels, y_pred)
precision = precision_score(test_labels, y_pred)
recall = recall_score(test_labels, y_pred)
f1 = f1_score(test_labels, y_pred)
roc_auc = roc_auc_score(test_labels, y_pred)
conf_matrix = confusion_matrix(test_labels, y_pred)
print("Model evaluation for: " + model_name + " on " + evaluated_set)
print(f"Accuracy: {accuracy}")
print(f"Precision: {precision}")
print(f"Recall: {recall}")
print(f"F1 Score: {f1}")
print(f"ROC AUC Score: {roc_auc}")
print(f"Confusion Matrix:\n{conf_matrix}")
if (evaluated_set == "test set"):
labels = [0, 1]
cm = confusion_matrix(test_labels, y_pred, labels = labels)
fig = plt.figure(figsize=(16,8))
plt1 = ConfusionMatrixDisplay(cm, display_labels =labels)
plt1.plot()
return accuracy
# Actual test set confusion metrics for reference
labels = [0, 1]
actual_cm = confusion_matrix(y_test, y_test, labels = labels)
plt2 = ConfusionMatrixDisplay(confusion_matrix= actual_cm, display_labels=labels)
plt2.plot()
<sklearn.metrics._plot.confusion_matrix.ConfusionMatrixDisplay at 0x78f248f02f80>
SVM¶
SVM Model¶
svc = SVC(random_state=42)
svc.fit(X_train_res,y_train_res)
evaluate(svc, "Base Model", "validation set",X_val, y_val )
Model evaluation for: Base Model on validation set Accuracy: 0.9563008130081301 Precision: 0.8723404255319149 Recall: 0.9403669724770642 F1 Score: 0.9050772626931567 ROC AUC Score: 0.9506012408077227 Confusion Matrix: [[1472 60] [ 26 410]]
0.9563008130081301
SVM Hyperparameter Tuning with GridSearchCV¶
For hyperparameter tuning, there are a few things we can change:
- Kernel: different kernel functions are used to map the input data into higher-dimensional space, so we can obtain a non-linear seperation. I.e. Polynomial, RBF and sigmoid. https://www.analyticsvidhya.com/blog/2021/10/support-vector-machinessvm-a-complete-guide-for-beginners/
- C (i.e. regularization parameter): C parameter controls trade-off between maximizing the margin and minimizing classification error. A smaller C value makes the margin wider, but allows more training points to be classified. vice versa
- Gamma Parameter (Kernel coefficient): For non-linear kernels, gamma parameter shapes the shape of the decision boundary - smaller values of gmma lead to a more flexible decision boundary, and vice versa.
- Degree parameter: set for degree of polynomial when kernel used is polynomial. Set the degree of the polynomial used to find the hyperplane to split the data
# grid = {
# 'C':[0.01,0.1,1,10],
# 'kernel' : ["linear","poly","rbf","sigmoid"],
# 'degree' : [1,2,3,4,5,6,7],
# 'gamma' : [0.01,1]
# }
# svc = SVC(random_state=42)
# svm_cv = GridSearchCV(svc, grid, cv=5)
# svm_cv.fit(X_train,y_train)
# print("Best Parameters:",svm_cv.best_params_)
# Running this code takes a very long time (~45minutes) so the results but the results are shown in the text block below.
Best Parameters: {'C': 0.01, 'degree': 3, 'gamma': 1, 'kernel': 'poly'}
# Use the best parameters and compare performance
svc = SVC(random_state=42, C=0.01, degree=3, gamma=1, kernel='poly')
svc.fit(X_train_res,y_train_res)
evaluate(svc, "Tuned Model", "validation set",X_val, y_val )
print("\n")
evaluate(svc, "Tuned Model", "test set", X_test, y_test )
Model evaluation for: Tuned Model on validation set Accuracy: 0.9659552845528455 Precision: 0.9002169197396963 Recall: 0.9518348623853211 F1 Score: 0.9253065774804905 ROC AUC Score: 0.9609043763623734 Confusion Matrix: [[1486 46] [ 21 415]] Model evaluation for: Tuned Model on test set Accuracy: 0.9573604060913705 Precision: 0.8893805309734514 Recall: 0.9220183486238532 F1 Score: 0.9054054054054055 ROC AUC Score: 0.9447119122519527 Confusion Matrix: [[742 25] [ 17 201]]
0.9573604060913705
<Figure size 1600x800 with 0 Axes>
Random Forest¶
We will be merging the training and validation set as the random forest has bootstrapping in place.
Random Forest Model¶
rfc = RandomForestClassifier(random_state = 42)
print('Parameters currently in use:\n')
print(rfc.get_params())
Parameters currently in use:
{'bootstrap': True, 'ccp_alpha': 0.0, 'class_weight': None, 'criterion': 'gini', 'max_depth': None, 'max_features': 'sqrt', 'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100, 'n_jobs': None, 'oob_score': False, 'random_state': 42, 'verbose': 0, 'warm_start': False}
rfc.fit(X_train_res, y_train_res)
evaluate(rfc, "Base Model", "validation set", X_val, y_val)
Model evaluation for: Base Model on validation set Accuracy: 0.9761178861788617 Precision: 0.9471264367816092 Recall: 0.944954128440367 F1 Score: 0.9460390355912743 ROC AUC Score: 0.9649705368050399 Confusion Matrix: [[1509 23] [ 24 412]]
0.9761178861788617
Random Forest Hyperparameter Tuning with RandomSearchCV¶
In view of long search time, we will first conduct a random search. Afterwhich, we will conduct a grid search with parameters close to the best parameters from the random search
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
max_features = ['auto', 'sqrt']
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
bootstrap = [True]
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(random_grid)
{'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], 'max_features': ['auto', 'sqrt'], 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'bootstrap': [True]}
warnings.simplefilter(action='ignore', category=FutureWarning)
model = RandomForestClassifier(random_state = 42)
model_random = RandomizedSearchCV(estimator = model, param_distributions = random_grid, n_iter = 100, cv = 3, verbose = 2, n_jobs = -1, random_state = 42)
model_random.fit(X_train_res, y_train_res)
Fitting 3 folds for each of 100 candidates, totalling 300 fits
RandomizedSearchCV(cv=3, estimator=RandomForestClassifier(random_state=42),
n_iter=100, n_jobs=-1,
param_distributions={'bootstrap': [True],
'max_depth': [10, 20, 30, 40, 50, 60,
70, 80, 90, 100, 110,
None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 5, 10],
'n_estimators': [200, 400, 600, 800,
1000, 1200, 1400, 1600,
1800, 2000]},
random_state=42, verbose=2)In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
RandomizedSearchCV(cv=3, estimator=RandomForestClassifier(random_state=42),
n_iter=100, n_jobs=-1,
param_distributions={'bootstrap': [True],
'max_depth': [10, 20, 30, 40, 50, 60,
70, 80, 90, 100, 110,
None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 5, 10],
'n_estimators': [200, 400, 600, 800,
1000, 1200, 1400, 1600,
1800, 2000]},
random_state=42, verbose=2)RandomForestClassifier(random_state=42)
RandomForestClassifier(random_state=42)
model_random.best_params_
{'n_estimators': 2000,
'min_samples_split': 2,
'min_samples_leaf': 1,
'max_features': 'sqrt',
'max_depth': None,
'bootstrap': True}
# Best_random = model_random.best_estimator_
best_random = RandomForestClassifier(n_estimators = 2000, min_samples_split = 2, min_samples_leaf = 1, max_features = "sqrt", max_depth = None, bootstrap = True, random_state = 42)
best_random.fit(X_train_res, y_train_res)
random_accuracy = evaluate(best_random, "Best Random Model", "validation set", X_val, y_val)
Model evaluation for: Best Random Model on validation set Accuracy: 0.9796747967479674 Precision: 0.952054794520548 Recall: 0.9564220183486238 F1 Score: 0.954233409610984 ROC AUC Score: 0.9713572232735286 Confusion Matrix: [[1511 21] [ 19 417]]
Random Forest Hyperparameter Tuning with GridSearchCV¶
param_grid = {
'n_estimators': [1900, 2000, 2100],
'max_depth': [None,1,2],
'max_features' : ['auto', 'sqrt'],
'min_samples_split': [2, 3, 4],
'min_samples_leaf': [1, 2],
'bootstrap': [True],
}
model_grid = RandomForestClassifier(random_state = 42)
grid_search = GridSearchCV(estimator = model_grid, param_grid = param_grid,
cv = 3, verbose = 2, n_jobs = -1)
grid_search.fit(X_train_res, y_train_res)
grid_search.best_params_
Fitting 3 folds for each of 108 candidates, totalling 324 fits
{'bootstrap': True,
'max_depth': None,
'max_features': 'auto',
'min_samples_leaf': 1,
'min_samples_split': 2,
'n_estimators': 2000}
rfc = RandomForestClassifier(n_estimators = 2000, min_samples_split = 2, min_samples_leaf = 1, max_features = 'sqrt',max_depth = None, bootstrap = True, random_state = 42)
rfc.fit(X_train_res, y_train_res)
grid_accuracy = evaluate(rfc, "Tuned Model", "test set", X_test, y_test)
Model evaluation for: Tuned Model on test set Accuracy: 0.9736040609137055 Precision: 0.9571428571428572 Recall: 0.9220183486238532 F1 Score: 0.9392523364485982 ROC AUC Score: 0.9551421599703361 Confusion Matrix: [[758 9] [ 17 201]]
<Figure size 1600x800 with 0 Axes>
Model: CNN¶
# Reshape data into 1D format for a 1D CNN
X_train_res = np.expand_dims(X_train_res, axis=2)
X_val = np.expand_dims(X_val, axis=2)
X_test = np.expand_dims(X_test, axis=2)
# Define the 1D CNN model
model = tf.keras.Sequential([
tf.keras.layers.Conv1D(64, 3, activation='relu', input_shape=(X_train_res.shape[1], 1)),
tf.keras.layers.MaxPooling1D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(
X_train_res,
y_train_res,
epochs=20,
batch_size=300,
validation_data=(X_val, y_val),
verbose=2
)
# Evaluate the model on the validation set
val_accuracy = history.history['val_accuracy'][-1]
print(f"Validation accuracy for pre-tuned model: {val_accuracy}")
# Evaluate the model on the test set
y_pred = model.predict(X_test)
y_pred_binary = (y_pred > 0.5).astype(int) # Convert probabilities to binary predictions (0 or 1)
accuracy = accuracy_score(y_test, y_pred_binary)
precision = precision_score(y_test, y_pred_binary)
recall = recall_score(y_test, y_pred_binary)
f1 = f1_score(y_test, y_pred_binary)
roc_auc = roc_auc_score(y_test, y_pred_binary)
conf_matrix = confusion_matrix(y_test, y_pred_binary)
print(f"Accuracy: {accuracy}")
print(f"Precision: {precision}")
print(f"Recall: {recall}")
print(f"F1 Score: {f1}")
print(f"ROC AUC Score: {roc_auc}")
print(f"Confusion Matrix:\n{conf_matrix}")
Epoch 1/20 36/36 - 2s - loss: 0.4112 - accuracy: 0.8379 - val_loss: 0.2497 - val_accuracy: 0.9187 - 2s/epoch - 65ms/step Epoch 2/20 36/36 - 1s - loss: 0.2140 - accuracy: 0.9241 - val_loss: 0.1975 - val_accuracy: 0.9299 - 628ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1673 - accuracy: 0.9405 - val_loss: 0.1431 - val_accuracy: 0.9487 - 617ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1490 - accuracy: 0.9450 - val_loss: 0.1474 - val_accuracy: 0.9461 - 644ms/epoch - 18ms/step Epoch 5/20 36/36 - 1s - loss: 0.1373 - accuracy: 0.9501 - val_loss: 0.1393 - val_accuracy: 0.9492 - 603ms/epoch - 17ms/step Epoch 6/20 36/36 - 0s - loss: 0.1258 - accuracy: 0.9534 - val_loss: 0.1265 - val_accuracy: 0.9517 - 424ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1221 - accuracy: 0.9542 - val_loss: 0.1247 - val_accuracy: 0.9512 - 423ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.1133 - accuracy: 0.9562 - val_loss: 0.1113 - val_accuracy: 0.9548 - 435ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.1093 - accuracy: 0.9586 - val_loss: 0.1142 - val_accuracy: 0.9533 - 396ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.1051 - accuracy: 0.9593 - val_loss: 0.1157 - val_accuracy: 0.9533 - 431ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1024 - accuracy: 0.9597 - val_loss: 0.1217 - val_accuracy: 0.9492 - 389ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0987 - accuracy: 0.9596 - val_loss: 0.1178 - val_accuracy: 0.9522 - 390ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0969 - accuracy: 0.9606 - val_loss: 0.1059 - val_accuracy: 0.9578 - 398ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.0938 - accuracy: 0.9621 - val_loss: 0.0973 - val_accuracy: 0.9593 - 390ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0931 - accuracy: 0.9621 - val_loss: 0.0999 - val_accuracy: 0.9619 - 435ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.0893 - accuracy: 0.9647 - val_loss: 0.1098 - val_accuracy: 0.9548 - 402ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0842 - accuracy: 0.9661 - val_loss: 0.0942 - val_accuracy: 0.9619 - 422ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.0836 - accuracy: 0.9663 - val_loss: 0.0978 - val_accuracy: 0.9599 - 427ms/epoch - 12ms/step Epoch 19/20 36/36 - 0s - loss: 0.0832 - accuracy: 0.9664 - val_loss: 0.0998 - val_accuracy: 0.9604 - 383ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0822 - accuracy: 0.9661 - val_loss: 0.0889 - val_accuracy: 0.9654 - 385ms/epoch - 11ms/step Validation accuracy for pre-tuned model: 0.9654471278190613 31/31 [==============================] - 0s 2ms/step Accuracy: 0.9614213197969543 Precision: 0.9090909090909091 Recall: 0.9174311926605505 F1 Score: 0.9132420091324202 ROC AUC Score: 0.9456777866822961 Confusion Matrix: [[747 20] [ 18 200]]
9. Model Tuning¶
# Hyperparameter tuning
learning_rates = [0.001, 0.01, 0.1]
num_filters = [32, 64, 128]
dropout_rate = [0.3, 0.5, 0.7]
filter_size = [3, 5, 7]
best_val_accuracy = 0
best_model = None
best_model_parameters = {}
num_epochs = 20
batch_size = 300
for lr in learning_rates:
for filter in num_filters:
for dr in dropout_rate:
for size in filter_size:
model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filter, size, activation='relu', input_shape=(X_train_res.shape[1], 1)),
tf.keras.layers.MaxPooling1D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(dr),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss='binary_crossentropy', metrics=['accuracy'])
# Train the model on the training data
history = model.fit(
X_train_res,
y_train_res,
epochs=num_epochs,
batch_size=batch_size,
validation_data=(X_val, y_val),
verbose=2
)
# Evaluate the model on the validation set
val_accuracy = history.history['val_accuracy'][-1]
print(f"Validation accuracy for Model of [Learning Rate {lr} | Num Filters {filter} | Dropout Rate {dr} | FilterSize {size}]: {val_accuracy}")
if val_accuracy > best_val_accuracy:
best_val_accuracy = val_accuracy
best_model = model
best_model_parameters = {
"learning_rate": lr,
"num_filters": filter,
"droupout_rate": dr,
"filter_size": size
}
Epoch 1/20 36/36 - 1s - loss: 0.4271 - accuracy: 0.8409 - val_loss: 0.2740 - val_accuracy: 0.9167 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.2321 - accuracy: 0.9177 - val_loss: 0.1943 - val_accuracy: 0.9350 - 290ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1792 - accuracy: 0.9397 - val_loss: 0.1572 - val_accuracy: 0.9446 - 276ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1554 - accuracy: 0.9464 - val_loss: 0.1325 - val_accuracy: 0.9543 - 292ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1404 - accuracy: 0.9502 - val_loss: 0.1385 - val_accuracy: 0.9512 - 275ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1346 - accuracy: 0.9504 - val_loss: 0.1353 - val_accuracy: 0.9512 - 290ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1258 - accuracy: 0.9539 - val_loss: 0.1330 - val_accuracy: 0.9472 - 343ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.1201 - accuracy: 0.9549 - val_loss: 0.1312 - val_accuracy: 0.9472 - 454ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.1162 - accuracy: 0.9550 - val_loss: 0.1132 - val_accuracy: 0.9548 - 474ms/epoch - 13ms/step Epoch 10/20 36/36 - 0s - loss: 0.1109 - accuracy: 0.9572 - val_loss: 0.1207 - val_accuracy: 0.9512 - 491ms/epoch - 14ms/step Epoch 11/20 36/36 - 0s - loss: 0.1029 - accuracy: 0.9594 - val_loss: 0.1123 - val_accuracy: 0.9543 - 487ms/epoch - 14ms/step Epoch 12/20 36/36 - 0s - loss: 0.1032 - accuracy: 0.9590 - val_loss: 0.1267 - val_accuracy: 0.9482 - 480ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.0988 - accuracy: 0.9621 - val_loss: 0.1057 - val_accuracy: 0.9553 - 496ms/epoch - 14ms/step Epoch 14/20 36/36 - 0s - loss: 0.0951 - accuracy: 0.9622 - val_loss: 0.1055 - val_accuracy: 0.9578 - 453ms/epoch - 13ms/step Epoch 15/20 36/36 - 0s - loss: 0.0918 - accuracy: 0.9632 - val_loss: 0.1133 - val_accuracy: 0.9533 - 465ms/epoch - 13ms/step Epoch 16/20 36/36 - 0s - loss: 0.0927 - accuracy: 0.9619 - val_loss: 0.1042 - val_accuracy: 0.9573 - 464ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.0878 - accuracy: 0.9642 - val_loss: 0.1167 - val_accuracy: 0.9507 - 424ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.0888 - accuracy: 0.9647 - val_loss: 0.1050 - val_accuracy: 0.9583 - 275ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0843 - accuracy: 0.9657 - val_loss: 0.0929 - val_accuracy: 0.9629 - 330ms/epoch - 9ms/step Epoch 20/20 36/36 - 0s - loss: 0.0830 - accuracy: 0.9663 - val_loss: 0.0931 - val_accuracy: 0.9614 - 290ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 3]: 0.9613820910453796 Epoch 1/20 36/36 - 1s - loss: 0.4660 - accuracy: 0.7946 - val_loss: 0.3023 - val_accuracy: 0.8933 - 1s/epoch - 36ms/step Epoch 2/20 36/36 - 0s - loss: 0.2607 - accuracy: 0.9057 - val_loss: 0.2183 - val_accuracy: 0.9263 - 291ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1901 - accuracy: 0.9338 - val_loss: 0.1457 - val_accuracy: 0.9466 - 273ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1579 - accuracy: 0.9422 - val_loss: 0.1459 - val_accuracy: 0.9441 - 282ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1375 - accuracy: 0.9497 - val_loss: 0.1308 - val_accuracy: 0.9517 - 275ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1264 - accuracy: 0.9532 - val_loss: 0.1196 - val_accuracy: 0.9548 - 287ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1217 - accuracy: 0.9533 - val_loss: 0.1553 - val_accuracy: 0.9421 - 279ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1114 - accuracy: 0.9566 - val_loss: 0.1207 - val_accuracy: 0.9492 - 278ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1048 - accuracy: 0.9591 - val_loss: 0.1164 - val_accuracy: 0.9507 - 289ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1021 - accuracy: 0.9596 - val_loss: 0.0967 - val_accuracy: 0.9619 - 321ms/epoch - 9ms/step Epoch 11/20 36/36 - 0s - loss: 0.0987 - accuracy: 0.9601 - val_loss: 0.1023 - val_accuracy: 0.9578 - 284ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.0950 - accuracy: 0.9620 - val_loss: 0.1077 - val_accuracy: 0.9568 - 271ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0903 - accuracy: 0.9639 - val_loss: 0.1013 - val_accuracy: 0.9568 - 286ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0879 - accuracy: 0.9644 - val_loss: 0.1053 - val_accuracy: 0.9563 - 280ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0842 - accuracy: 0.9662 - val_loss: 0.0909 - val_accuracy: 0.9649 - 272ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0825 - accuracy: 0.9681 - val_loss: 0.0916 - val_accuracy: 0.9675 - 320ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.0822 - accuracy: 0.9672 - val_loss: 0.1256 - val_accuracy: 0.9461 - 283ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0788 - accuracy: 0.9690 - val_loss: 0.1151 - val_accuracy: 0.9507 - 272ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0782 - accuracy: 0.9688 - val_loss: 0.0886 - val_accuracy: 0.9675 - 307ms/epoch - 9ms/step Epoch 20/20 36/36 - 0s - loss: 0.0776 - accuracy: 0.9693 - val_loss: 0.0956 - val_accuracy: 0.9629 - 438ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 5]: 0.9629064798355103 Epoch 1/20 36/36 - 1s - loss: 0.4879 - accuracy: 0.7863 - val_loss: 0.3615 - val_accuracy: 0.8572 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.2843 - accuracy: 0.8993 - val_loss: 0.2261 - val_accuracy: 0.9248 - 316ms/epoch - 9ms/step Epoch 3/20 36/36 - 0s - loss: 0.2022 - accuracy: 0.9298 - val_loss: 0.1840 - val_accuracy: 0.9324 - 286ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1661 - accuracy: 0.9420 - val_loss: 0.1690 - val_accuracy: 0.9334 - 268ms/epoch - 7ms/step Epoch 5/20 36/36 - 0s - loss: 0.1469 - accuracy: 0.9480 - val_loss: 0.1342 - val_accuracy: 0.9497 - 260ms/epoch - 7ms/step Epoch 6/20 36/36 - 0s - loss: 0.1333 - accuracy: 0.9512 - val_loss: 0.1361 - val_accuracy: 0.9502 - 270ms/epoch - 7ms/step Epoch 7/20 36/36 - 0s - loss: 0.1232 - accuracy: 0.9535 - val_loss: 0.1361 - val_accuracy: 0.9512 - 280ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1155 - accuracy: 0.9571 - val_loss: 0.1167 - val_accuracy: 0.9548 - 316ms/epoch - 9ms/step Epoch 9/20 36/36 - 0s - loss: 0.1092 - accuracy: 0.9594 - val_loss: 0.1304 - val_accuracy: 0.9472 - 255ms/epoch - 7ms/step Epoch 10/20 36/36 - 0s - loss: 0.1044 - accuracy: 0.9599 - val_loss: 0.1099 - val_accuracy: 0.9553 - 313ms/epoch - 9ms/step Epoch 11/20 36/36 - 0s - loss: 0.0995 - accuracy: 0.9605 - val_loss: 0.1175 - val_accuracy: 0.9517 - 258ms/epoch - 7ms/step Epoch 12/20 36/36 - 0s - loss: 0.0969 - accuracy: 0.9609 - val_loss: 0.0931 - val_accuracy: 0.9644 - 268ms/epoch - 7ms/step Epoch 13/20 36/36 - 0s - loss: 0.0962 - accuracy: 0.9619 - val_loss: 0.1088 - val_accuracy: 0.9533 - 283ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0897 - accuracy: 0.9651 - val_loss: 0.1081 - val_accuracy: 0.9538 - 290ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0874 - accuracy: 0.9657 - val_loss: 0.1101 - val_accuracy: 0.9512 - 271ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0821 - accuracy: 0.9664 - val_loss: 0.0954 - val_accuracy: 0.9609 - 310ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.0789 - accuracy: 0.9670 - val_loss: 0.1042 - val_accuracy: 0.9553 - 277ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0783 - accuracy: 0.9683 - val_loss: 0.0884 - val_accuracy: 0.9654 - 261ms/epoch - 7ms/step Epoch 19/20 36/36 - 0s - loss: 0.0784 - accuracy: 0.9686 - val_loss: 0.0919 - val_accuracy: 0.9604 - 253ms/epoch - 7ms/step Epoch 20/20 36/36 - 0s - loss: 0.0731 - accuracy: 0.9705 - val_loss: 0.0931 - val_accuracy: 0.9609 - 258ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 7]: 0.9608739614486694 Epoch 1/20 36/36 - 1s - loss: 0.4754 - accuracy: 0.7887 - val_loss: 0.3060 - val_accuracy: 0.9101 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.2609 - accuracy: 0.9047 - val_loss: 0.2105 - val_accuracy: 0.9268 - 276ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1887 - accuracy: 0.9366 - val_loss: 0.1602 - val_accuracy: 0.9436 - 408ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1647 - accuracy: 0.9416 - val_loss: 0.1435 - val_accuracy: 0.9472 - 475ms/epoch - 13ms/step Epoch 5/20 36/36 - 0s - loss: 0.1498 - accuracy: 0.9480 - val_loss: 0.1542 - val_accuracy: 0.9421 - 426ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.1419 - accuracy: 0.9509 - val_loss: 0.1338 - val_accuracy: 0.9512 - 442ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1324 - accuracy: 0.9527 - val_loss: 0.1438 - val_accuracy: 0.9436 - 455ms/epoch - 13ms/step Epoch 8/20 36/36 - 0s - loss: 0.1277 - accuracy: 0.9529 - val_loss: 0.1172 - val_accuracy: 0.9548 - 430ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.1217 - accuracy: 0.9534 - val_loss: 0.1244 - val_accuracy: 0.9512 - 471ms/epoch - 13ms/step Epoch 10/20 36/36 - 0s - loss: 0.1176 - accuracy: 0.9563 - val_loss: 0.1193 - val_accuracy: 0.9507 - 448ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1125 - accuracy: 0.9569 - val_loss: 0.1108 - val_accuracy: 0.9568 - 471ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1083 - accuracy: 0.9586 - val_loss: 0.1047 - val_accuracy: 0.9599 - 461ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.1072 - accuracy: 0.9592 - val_loss: 0.1142 - val_accuracy: 0.9563 - 441ms/epoch - 12ms/step Epoch 14/20 36/36 - 0s - loss: 0.1039 - accuracy: 0.9596 - val_loss: 0.1005 - val_accuracy: 0.9599 - 261ms/epoch - 7ms/step Epoch 15/20 36/36 - 0s - loss: 0.0986 - accuracy: 0.9616 - val_loss: 0.1108 - val_accuracy: 0.9558 - 266ms/epoch - 7ms/step Epoch 16/20 36/36 - 0s - loss: 0.0974 - accuracy: 0.9635 - val_loss: 0.1029 - val_accuracy: 0.9588 - 268ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.0944 - accuracy: 0.9630 - val_loss: 0.1086 - val_accuracy: 0.9558 - 287ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0918 - accuracy: 0.9630 - val_loss: 0.1060 - val_accuracy: 0.9573 - 271ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0920 - accuracy: 0.9642 - val_loss: 0.0978 - val_accuracy: 0.9609 - 289ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0906 - accuracy: 0.9631 - val_loss: 0.1054 - val_accuracy: 0.9583 - 330ms/epoch - 9ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 3]: 0.9583333134651184 Epoch 1/20 36/36 - 1s - loss: 0.4788 - accuracy: 0.7969 - val_loss: 0.3236 - val_accuracy: 0.8933 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.2768 - accuracy: 0.8990 - val_loss: 0.2203 - val_accuracy: 0.9273 - 278ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.2034 - accuracy: 0.9304 - val_loss: 0.1937 - val_accuracy: 0.9314 - 281ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1706 - accuracy: 0.9407 - val_loss: 0.1593 - val_accuracy: 0.9400 - 279ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1499 - accuracy: 0.9459 - val_loss: 0.1306 - val_accuracy: 0.9522 - 277ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1360 - accuracy: 0.9500 - val_loss: 0.1335 - val_accuracy: 0.9487 - 271ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1288 - accuracy: 0.9522 - val_loss: 0.1341 - val_accuracy: 0.9482 - 299ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1212 - accuracy: 0.9536 - val_loss: 0.1144 - val_accuracy: 0.9558 - 272ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1184 - accuracy: 0.9552 - val_loss: 0.1568 - val_accuracy: 0.9416 - 287ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1113 - accuracy: 0.9576 - val_loss: 0.1061 - val_accuracy: 0.9573 - 279ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1095 - accuracy: 0.9575 - val_loss: 0.1103 - val_accuracy: 0.9527 - 338ms/epoch - 9ms/step Epoch 12/20 36/36 - 0s - loss: 0.1028 - accuracy: 0.9596 - val_loss: 0.1051 - val_accuracy: 0.9578 - 274ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1039 - accuracy: 0.9587 - val_loss: 0.0993 - val_accuracy: 0.9599 - 274ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0974 - accuracy: 0.9615 - val_loss: 0.0980 - val_accuracy: 0.9624 - 285ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0967 - accuracy: 0.9607 - val_loss: 0.1082 - val_accuracy: 0.9578 - 279ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0910 - accuracy: 0.9640 - val_loss: 0.1061 - val_accuracy: 0.9553 - 270ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.0913 - accuracy: 0.9619 - val_loss: 0.1034 - val_accuracy: 0.9548 - 275ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0880 - accuracy: 0.9658 - val_loss: 0.1078 - val_accuracy: 0.9527 - 270ms/epoch - 7ms/step Epoch 19/20 36/36 - 0s - loss: 0.0861 - accuracy: 0.9626 - val_loss: 0.1068 - val_accuracy: 0.9588 - 272ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0852 - accuracy: 0.9661 - val_loss: 0.0962 - val_accuracy: 0.9624 - 266ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 5]: 0.9623983502388 Epoch 1/20 36/36 - 1s - loss: 0.4954 - accuracy: 0.7739 - val_loss: 0.3384 - val_accuracy: 0.8699 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.2844 - accuracy: 0.8928 - val_loss: 0.2352 - val_accuracy: 0.9197 - 268ms/epoch - 7ms/step Epoch 3/20 36/36 - 0s - loss: 0.2056 - accuracy: 0.9260 - val_loss: 0.1822 - val_accuracy: 0.9299 - 269ms/epoch - 7ms/step Epoch 4/20 36/36 - 0s - loss: 0.1742 - accuracy: 0.9352 - val_loss: 0.1664 - val_accuracy: 0.9360 - 277ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1548 - accuracy: 0.9421 - val_loss: 0.1484 - val_accuracy: 0.9456 - 264ms/epoch - 7ms/step Epoch 6/20 36/36 - 0s - loss: 0.1394 - accuracy: 0.9475 - val_loss: 0.1333 - val_accuracy: 0.9482 - 258ms/epoch - 7ms/step Epoch 7/20 36/36 - 0s - loss: 0.1297 - accuracy: 0.9509 - val_loss: 0.1257 - val_accuracy: 0.9527 - 269ms/epoch - 7ms/step Epoch 8/20 36/36 - 0s - loss: 0.1224 - accuracy: 0.9527 - val_loss: 0.1200 - val_accuracy: 0.9553 - 267ms/epoch - 7ms/step Epoch 9/20 36/36 - 0s - loss: 0.1143 - accuracy: 0.9567 - val_loss: 0.1261 - val_accuracy: 0.9517 - 273ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1095 - accuracy: 0.9585 - val_loss: 0.1294 - val_accuracy: 0.9492 - 275ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1044 - accuracy: 0.9601 - val_loss: 0.1095 - val_accuracy: 0.9563 - 300ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1024 - accuracy: 0.9611 - val_loss: 0.1166 - val_accuracy: 0.9543 - 281ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0957 - accuracy: 0.9632 - val_loss: 0.1171 - val_accuracy: 0.9543 - 319ms/epoch - 9ms/step Epoch 14/20 36/36 - 0s - loss: 0.0941 - accuracy: 0.9629 - val_loss: 0.1217 - val_accuracy: 0.9512 - 336ms/epoch - 9ms/step Epoch 15/20 36/36 - 0s - loss: 0.0914 - accuracy: 0.9649 - val_loss: 0.1184 - val_accuracy: 0.9507 - 284ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0893 - accuracy: 0.9649 - val_loss: 0.1116 - val_accuracy: 0.9543 - 266ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.0861 - accuracy: 0.9651 - val_loss: 0.0979 - val_accuracy: 0.9624 - 269ms/epoch - 7ms/step Epoch 18/20 36/36 - 0s - loss: 0.0844 - accuracy: 0.9673 - val_loss: 0.0945 - val_accuracy: 0.9639 - 271ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0820 - accuracy: 0.9666 - val_loss: 0.1062 - val_accuracy: 0.9568 - 270ms/epoch - 7ms/step Epoch 20/20 36/36 - 0s - loss: 0.0805 - accuracy: 0.9697 - val_loss: 0.1023 - val_accuracy: 0.9593 - 267ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 7]: 0.9593495726585388 Epoch 1/20 36/36 - 1s - loss: 0.4642 - accuracy: 0.7955 - val_loss: 0.2887 - val_accuracy: 0.9101 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.2555 - accuracy: 0.9073 - val_loss: 0.1932 - val_accuracy: 0.9345 - 291ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.2003 - accuracy: 0.9290 - val_loss: 0.1630 - val_accuracy: 0.9436 - 279ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1743 - accuracy: 0.9387 - val_loss: 0.1594 - val_accuracy: 0.9431 - 289ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1601 - accuracy: 0.9434 - val_loss: 0.1554 - val_accuracy: 0.9431 - 326ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.1509 - accuracy: 0.9446 - val_loss: 0.1310 - val_accuracy: 0.9502 - 284ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1453 - accuracy: 0.9474 - val_loss: 0.1343 - val_accuracy: 0.9502 - 407ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1410 - accuracy: 0.9485 - val_loss: 0.1286 - val_accuracy: 0.9497 - 446ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.1351 - accuracy: 0.9522 - val_loss: 0.1337 - val_accuracy: 0.9497 - 449ms/epoch - 12ms/step Epoch 10/20 36/36 - 1s - loss: 0.1325 - accuracy: 0.9522 - val_loss: 0.1243 - val_accuracy: 0.9533 - 507ms/epoch - 14ms/step Epoch 11/20 36/36 - 0s - loss: 0.1243 - accuracy: 0.9544 - val_loss: 0.1314 - val_accuracy: 0.9502 - 456ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1245 - accuracy: 0.9533 - val_loss: 0.1282 - val_accuracy: 0.9517 - 463ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.1215 - accuracy: 0.9546 - val_loss: 0.1215 - val_accuracy: 0.9543 - 449ms/epoch - 12ms/step Epoch 14/20 36/36 - 0s - loss: 0.1178 - accuracy: 0.9580 - val_loss: 0.1188 - val_accuracy: 0.9573 - 471ms/epoch - 13ms/step Epoch 15/20 36/36 - 0s - loss: 0.1160 - accuracy: 0.9555 - val_loss: 0.1236 - val_accuracy: 0.9548 - 485ms/epoch - 13ms/step Epoch 16/20 36/36 - 0s - loss: 0.1102 - accuracy: 0.9572 - val_loss: 0.1197 - val_accuracy: 0.9548 - 471ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.1086 - accuracy: 0.9574 - val_loss: 0.1222 - val_accuracy: 0.9522 - 389ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.1082 - accuracy: 0.9580 - val_loss: 0.1034 - val_accuracy: 0.9578 - 280ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.1093 - accuracy: 0.9573 - val_loss: 0.1137 - val_accuracy: 0.9553 - 272ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.1061 - accuracy: 0.9585 - val_loss: 0.0990 - val_accuracy: 0.9619 - 286ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 3]: 0.9618902206420898 Epoch 1/20 36/36 - 1s - loss: 0.4929 - accuracy: 0.7904 - val_loss: 0.3399 - val_accuracy: 0.8664 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.2934 - accuracy: 0.8975 - val_loss: 0.2357 - val_accuracy: 0.9268 - 309ms/epoch - 9ms/step Epoch 3/20 36/36 - 0s - loss: 0.2173 - accuracy: 0.9273 - val_loss: 0.1936 - val_accuracy: 0.9314 - 281ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1882 - accuracy: 0.9342 - val_loss: 0.1623 - val_accuracy: 0.9441 - 281ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1633 - accuracy: 0.9422 - val_loss: 0.1493 - val_accuracy: 0.9441 - 284ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1490 - accuracy: 0.9442 - val_loss: 0.1381 - val_accuracy: 0.9507 - 290ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1388 - accuracy: 0.9486 - val_loss: 0.1235 - val_accuracy: 0.9533 - 275ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1324 - accuracy: 0.9507 - val_loss: 0.1489 - val_accuracy: 0.9446 - 298ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1304 - accuracy: 0.9507 - val_loss: 0.1307 - val_accuracy: 0.9517 - 269ms/epoch - 7ms/step Epoch 10/20 36/36 - 0s - loss: 0.1205 - accuracy: 0.9539 - val_loss: 0.1247 - val_accuracy: 0.9517 - 277ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1161 - accuracy: 0.9576 - val_loss: 0.1193 - val_accuracy: 0.9548 - 337ms/epoch - 9ms/step Epoch 12/20 36/36 - 0s - loss: 0.1146 - accuracy: 0.9560 - val_loss: 0.1177 - val_accuracy: 0.9543 - 269ms/epoch - 7ms/step Epoch 13/20 36/36 - 0s - loss: 0.1096 - accuracy: 0.9563 - val_loss: 0.1132 - val_accuracy: 0.9578 - 272ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.1064 - accuracy: 0.9578 - val_loss: 0.1068 - val_accuracy: 0.9599 - 268ms/epoch - 7ms/step Epoch 15/20 36/36 - 0s - loss: 0.1062 - accuracy: 0.9569 - val_loss: 0.0993 - val_accuracy: 0.9599 - 287ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.1051 - accuracy: 0.9589 - val_loss: 0.1132 - val_accuracy: 0.9543 - 269ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.1004 - accuracy: 0.9614 - val_loss: 0.1070 - val_accuracy: 0.9593 - 273ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0996 - accuracy: 0.9596 - val_loss: 0.1012 - val_accuracy: 0.9614 - 278ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0966 - accuracy: 0.9603 - val_loss: 0.1168 - val_accuracy: 0.9507 - 292ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0938 - accuracy: 0.9631 - val_loss: 0.0994 - val_accuracy: 0.9624 - 282ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 5]: 0.9623983502388 Epoch 1/20 36/36 - 1s - loss: 0.5195 - accuracy: 0.7626 - val_loss: 0.3782 - val_accuracy: 0.8521 - 1s/epoch - 41ms/step Epoch 2/20 36/36 - 0s - loss: 0.3182 - accuracy: 0.8860 - val_loss: 0.2327 - val_accuracy: 0.9212 - 275ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.2274 - accuracy: 0.9194 - val_loss: 0.1837 - val_accuracy: 0.9360 - 267ms/epoch - 7ms/step Epoch 4/20 36/36 - 0s - loss: 0.1858 - accuracy: 0.9327 - val_loss: 0.1706 - val_accuracy: 0.9400 - 322ms/epoch - 9ms/step Epoch 5/20 36/36 - 0s - loss: 0.1628 - accuracy: 0.9408 - val_loss: 0.1467 - val_accuracy: 0.9451 - 268ms/epoch - 7ms/step Epoch 6/20 36/36 - 0s - loss: 0.1522 - accuracy: 0.9442 - val_loss: 0.1313 - val_accuracy: 0.9527 - 268ms/epoch - 7ms/step Epoch 7/20 36/36 - 0s - loss: 0.1409 - accuracy: 0.9498 - val_loss: 0.1295 - val_accuracy: 0.9548 - 285ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1287 - accuracy: 0.9529 - val_loss: 0.1358 - val_accuracy: 0.9482 - 266ms/epoch - 7ms/step Epoch 9/20 36/36 - 0s - loss: 0.1290 - accuracy: 0.9536 - val_loss: 0.1249 - val_accuracy: 0.9543 - 275ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1239 - accuracy: 0.9556 - val_loss: 0.1163 - val_accuracy: 0.9583 - 266ms/epoch - 7ms/step Epoch 11/20 36/36 - 0s - loss: 0.1177 - accuracy: 0.9545 - val_loss: 0.1114 - val_accuracy: 0.9604 - 287ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1164 - accuracy: 0.9571 - val_loss: 0.1277 - val_accuracy: 0.9517 - 271ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1091 - accuracy: 0.9590 - val_loss: 0.1134 - val_accuracy: 0.9573 - 269ms/epoch - 7ms/step Epoch 14/20 36/36 - 0s - loss: 0.1081 - accuracy: 0.9600 - val_loss: 0.1195 - val_accuracy: 0.9538 - 276ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.1071 - accuracy: 0.9580 - val_loss: 0.1088 - val_accuracy: 0.9553 - 289ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.1033 - accuracy: 0.9606 - val_loss: 0.1161 - val_accuracy: 0.9548 - 284ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0996 - accuracy: 0.9615 - val_loss: 0.1335 - val_accuracy: 0.9456 - 267ms/epoch - 7ms/step Epoch 18/20 36/36 - 0s - loss: 0.0983 - accuracy: 0.9613 - val_loss: 0.1031 - val_accuracy: 0.9604 - 277ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0966 - accuracy: 0.9619 - val_loss: 0.1249 - val_accuracy: 0.9512 - 271ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0975 - accuracy: 0.9632 - val_loss: 0.1037 - val_accuracy: 0.9593 - 270ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 7]: 0.9593495726585388 Epoch 1/20 36/36 - 1s - loss: 0.3899 - accuracy: 0.8417 - val_loss: 0.2616 - val_accuracy: 0.9075 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.2042 - accuracy: 0.9291 - val_loss: 0.2014 - val_accuracy: 0.9284 - 387ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1625 - accuracy: 0.9426 - val_loss: 0.1428 - val_accuracy: 0.9497 - 424ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.1466 - accuracy: 0.9477 - val_loss: 0.1440 - val_accuracy: 0.9431 - 382ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1342 - accuracy: 0.9502 - val_loss: 0.1464 - val_accuracy: 0.9446 - 410ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.1267 - accuracy: 0.9539 - val_loss: 0.1504 - val_accuracy: 0.9431 - 389ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.1209 - accuracy: 0.9552 - val_loss: 0.1174 - val_accuracy: 0.9548 - 394ms/epoch - 11ms/step Epoch 8/20 36/36 - 1s - loss: 0.1180 - accuracy: 0.9551 - val_loss: 0.1055 - val_accuracy: 0.9578 - 511ms/epoch - 14ms/step Epoch 9/20 36/36 - 1s - loss: 0.1127 - accuracy: 0.9561 - val_loss: 0.1130 - val_accuracy: 0.9538 - 635ms/epoch - 18ms/step Epoch 10/20 36/36 - 1s - loss: 0.1091 - accuracy: 0.9571 - val_loss: 0.1062 - val_accuracy: 0.9563 - 655ms/epoch - 18ms/step Epoch 11/20 36/36 - 1s - loss: 0.1009 - accuracy: 0.9590 - val_loss: 0.1248 - val_accuracy: 0.9477 - 646ms/epoch - 18ms/step Epoch 12/20 36/36 - 1s - loss: 0.0980 - accuracy: 0.9619 - val_loss: 0.1139 - val_accuracy: 0.9548 - 647ms/epoch - 18ms/step Epoch 13/20 36/36 - 1s - loss: 0.0958 - accuracy: 0.9610 - val_loss: 0.1156 - val_accuracy: 0.9522 - 622ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.0919 - accuracy: 0.9635 - val_loss: 0.1000 - val_accuracy: 0.9639 - 614ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0866 - accuracy: 0.9650 - val_loss: 0.1085 - val_accuracy: 0.9568 - 643ms/epoch - 18ms/step Epoch 16/20 36/36 - 0s - loss: 0.0847 - accuracy: 0.9658 - val_loss: 0.1059 - val_accuracy: 0.9583 - 464ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.0843 - accuracy: 0.9660 - val_loss: 0.1282 - val_accuracy: 0.9461 - 382ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.0847 - accuracy: 0.9663 - val_loss: 0.0984 - val_accuracy: 0.9624 - 409ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0823 - accuracy: 0.9676 - val_loss: 0.0892 - val_accuracy: 0.9644 - 374ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0774 - accuracy: 0.9683 - val_loss: 0.0943 - val_accuracy: 0.9639 - 421ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 3]: 0.9639227390289307 Epoch 1/20 36/36 - 1s - loss: 0.4230 - accuracy: 0.8305 - val_loss: 0.2711 - val_accuracy: 0.9136 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.2237 - accuracy: 0.9213 - val_loss: 0.1889 - val_accuracy: 0.9304 - 419ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1688 - accuracy: 0.9401 - val_loss: 0.1492 - val_accuracy: 0.9446 - 369ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1477 - accuracy: 0.9453 - val_loss: 0.1314 - val_accuracy: 0.9502 - 415ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1353 - accuracy: 0.9498 - val_loss: 0.1260 - val_accuracy: 0.9548 - 386ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.1212 - accuracy: 0.9545 - val_loss: 0.1251 - val_accuracy: 0.9522 - 415ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1151 - accuracy: 0.9566 - val_loss: 0.1177 - val_accuracy: 0.9543 - 430ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.1082 - accuracy: 0.9589 - val_loss: 0.1071 - val_accuracy: 0.9588 - 378ms/epoch - 10ms/step Epoch 9/20 36/36 - 0s - loss: 0.1050 - accuracy: 0.9600 - val_loss: 0.1273 - val_accuracy: 0.9517 - 382ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0984 - accuracy: 0.9607 - val_loss: 0.1066 - val_accuracy: 0.9558 - 429ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.0973 - accuracy: 0.9617 - val_loss: 0.1073 - val_accuracy: 0.9578 - 410ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0911 - accuracy: 0.9642 - val_loss: 0.0938 - val_accuracy: 0.9649 - 386ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0910 - accuracy: 0.9636 - val_loss: 0.1054 - val_accuracy: 0.9568 - 362ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0866 - accuracy: 0.9658 - val_loss: 0.1161 - val_accuracy: 0.9502 - 362ms/epoch - 10ms/step Epoch 15/20 36/36 - 0s - loss: 0.0825 - accuracy: 0.9659 - val_loss: 0.1014 - val_accuracy: 0.9593 - 423ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.0809 - accuracy: 0.9668 - val_loss: 0.1006 - val_accuracy: 0.9599 - 381ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0793 - accuracy: 0.9689 - val_loss: 0.1000 - val_accuracy: 0.9588 - 369ms/epoch - 10ms/step Epoch 18/20 36/36 - 1s - loss: 0.0771 - accuracy: 0.9701 - val_loss: 0.0969 - val_accuracy: 0.9624 - 586ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0747 - accuracy: 0.9703 - val_loss: 0.0899 - val_accuracy: 0.9675 - 588ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0768 - accuracy: 0.9689 - val_loss: 0.0937 - val_accuracy: 0.9619 - 557ms/epoch - 15ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 5]: 0.9618902206420898 Epoch 1/20 36/36 - 2s - loss: 0.4356 - accuracy: 0.8142 - val_loss: 0.2544 - val_accuracy: 0.9131 - 2s/epoch - 55ms/step Epoch 2/20 36/36 - 1s - loss: 0.2231 - accuracy: 0.9197 - val_loss: 0.1818 - val_accuracy: 0.9319 - 587ms/epoch - 16ms/step Epoch 3/20 36/36 - 1s - loss: 0.1654 - accuracy: 0.9406 - val_loss: 0.1706 - val_accuracy: 0.9339 - 528ms/epoch - 15ms/step Epoch 4/20 36/36 - 0s - loss: 0.1423 - accuracy: 0.9483 - val_loss: 0.1242 - val_accuracy: 0.9573 - 366ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.1331 - accuracy: 0.9513 - val_loss: 0.1414 - val_accuracy: 0.9466 - 353ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1184 - accuracy: 0.9538 - val_loss: 0.1140 - val_accuracy: 0.9568 - 352ms/epoch - 10ms/step Epoch 7/20 36/36 - 0s - loss: 0.1123 - accuracy: 0.9549 - val_loss: 0.1322 - val_accuracy: 0.9487 - 403ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1053 - accuracy: 0.9592 - val_loss: 0.0997 - val_accuracy: 0.9614 - 387ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.0985 - accuracy: 0.9607 - val_loss: 0.1052 - val_accuracy: 0.9583 - 395ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0959 - accuracy: 0.9607 - val_loss: 0.1264 - val_accuracy: 0.9466 - 359ms/epoch - 10ms/step Epoch 11/20 36/36 - 0s - loss: 0.0915 - accuracy: 0.9629 - val_loss: 0.1112 - val_accuracy: 0.9527 - 359ms/epoch - 10ms/step Epoch 12/20 36/36 - 0s - loss: 0.0896 - accuracy: 0.9644 - val_loss: 0.1024 - val_accuracy: 0.9573 - 401ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0832 - accuracy: 0.9670 - val_loss: 0.0986 - val_accuracy: 0.9604 - 354ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0800 - accuracy: 0.9684 - val_loss: 0.0970 - val_accuracy: 0.9599 - 386ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0791 - accuracy: 0.9678 - val_loss: 0.0973 - val_accuracy: 0.9624 - 361ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.0768 - accuracy: 0.9689 - val_loss: 0.0920 - val_accuracy: 0.9649 - 356ms/epoch - 10ms/step Epoch 17/20 36/36 - 0s - loss: 0.0727 - accuracy: 0.9710 - val_loss: 0.0943 - val_accuracy: 0.9629 - 399ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.0734 - accuracy: 0.9703 - val_loss: 0.1101 - val_accuracy: 0.9522 - 378ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.0737 - accuracy: 0.9698 - val_loss: 0.0850 - val_accuracy: 0.9680 - 365ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0684 - accuracy: 0.9718 - val_loss: 0.0978 - val_accuracy: 0.9604 - 365ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 7]: 0.9603658318519592 Epoch 1/20 36/36 - 1s - loss: 0.4035 - accuracy: 0.8390 - val_loss: 0.2402 - val_accuracy: 0.9151 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.2206 - accuracy: 0.9230 - val_loss: 0.1940 - val_accuracy: 0.9355 - 430ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1740 - accuracy: 0.9410 - val_loss: 0.1379 - val_accuracy: 0.9507 - 433ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.1541 - accuracy: 0.9454 - val_loss: 0.1395 - val_accuracy: 0.9472 - 435ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1446 - accuracy: 0.9478 - val_loss: 0.1381 - val_accuracy: 0.9456 - 394ms/epoch - 11ms/step Epoch 6/20 36/36 - 1s - loss: 0.1317 - accuracy: 0.9528 - val_loss: 0.1174 - val_accuracy: 0.9553 - 510ms/epoch - 14ms/step Epoch 7/20 36/36 - 1s - loss: 0.1271 - accuracy: 0.9524 - val_loss: 0.1234 - val_accuracy: 0.9533 - 626ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.1213 - accuracy: 0.9541 - val_loss: 0.1129 - val_accuracy: 0.9573 - 675ms/epoch - 19ms/step Epoch 9/20 36/36 - 1s - loss: 0.1155 - accuracy: 0.9547 - val_loss: 0.1171 - val_accuracy: 0.9578 - 845ms/epoch - 23ms/step Epoch 10/20 36/36 - 1s - loss: 0.1128 - accuracy: 0.9566 - val_loss: 0.1323 - val_accuracy: 0.9472 - 775ms/epoch - 22ms/step Epoch 11/20 36/36 - 1s - loss: 0.1087 - accuracy: 0.9570 - val_loss: 0.1089 - val_accuracy: 0.9553 - 771ms/epoch - 21ms/step Epoch 12/20 36/36 - 1s - loss: 0.1037 - accuracy: 0.9584 - val_loss: 0.1080 - val_accuracy: 0.9573 - 764ms/epoch - 21ms/step Epoch 13/20 36/36 - 1s - loss: 0.1029 - accuracy: 0.9591 - val_loss: 0.1113 - val_accuracy: 0.9543 - 756ms/epoch - 21ms/step Epoch 14/20 36/36 - 1s - loss: 0.0995 - accuracy: 0.9601 - val_loss: 0.1024 - val_accuracy: 0.9604 - 743ms/epoch - 21ms/step Epoch 15/20 36/36 - 1s - loss: 0.0953 - accuracy: 0.9621 - val_loss: 0.1135 - val_accuracy: 0.9553 - 766ms/epoch - 21ms/step Epoch 16/20 36/36 - 1s - loss: 0.0949 - accuracy: 0.9615 - val_loss: 0.0970 - val_accuracy: 0.9624 - 723ms/epoch - 20ms/step Epoch 17/20 36/36 - 1s - loss: 0.0908 - accuracy: 0.9626 - val_loss: 0.1041 - val_accuracy: 0.9588 - 715ms/epoch - 20ms/step Epoch 18/20 36/36 - 1s - loss: 0.0891 - accuracy: 0.9634 - val_loss: 0.0965 - val_accuracy: 0.9629 - 612ms/epoch - 17ms/step Epoch 19/20 36/36 - 0s - loss: 0.0866 - accuracy: 0.9649 - val_loss: 0.0986 - val_accuracy: 0.9599 - 403ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0868 - accuracy: 0.9640 - val_loss: 0.0926 - val_accuracy: 0.9654 - 406ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 3]: 0.9654471278190613 Epoch 1/20 36/36 - 1s - loss: 0.4323 - accuracy: 0.8211 - val_loss: 0.2589 - val_accuracy: 0.9162 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.2335 - accuracy: 0.9160 - val_loss: 0.1955 - val_accuracy: 0.9304 - 401ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1761 - accuracy: 0.9370 - val_loss: 0.1518 - val_accuracy: 0.9451 - 370ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1498 - accuracy: 0.9453 - val_loss: 0.1239 - val_accuracy: 0.9578 - 381ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1398 - accuracy: 0.9497 - val_loss: 0.1282 - val_accuracy: 0.9548 - 363ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1268 - accuracy: 0.9548 - val_loss: 0.1472 - val_accuracy: 0.9451 - 386ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.1231 - accuracy: 0.9552 - val_loss: 0.1160 - val_accuracy: 0.9573 - 412ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1133 - accuracy: 0.9568 - val_loss: 0.1656 - val_accuracy: 0.9360 - 371ms/epoch - 10ms/step Epoch 9/20 36/36 - 0s - loss: 0.1079 - accuracy: 0.9568 - val_loss: 0.1038 - val_accuracy: 0.9604 - 424ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.1040 - accuracy: 0.9599 - val_loss: 0.1249 - val_accuracy: 0.9502 - 407ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.0999 - accuracy: 0.9608 - val_loss: 0.1253 - val_accuracy: 0.9482 - 373ms/epoch - 10ms/step Epoch 12/20 36/36 - 0s - loss: 0.0977 - accuracy: 0.9607 - val_loss: 0.0963 - val_accuracy: 0.9614 - 418ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.0935 - accuracy: 0.9632 - val_loss: 0.1080 - val_accuracy: 0.9563 - 357ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0919 - accuracy: 0.9616 - val_loss: 0.1169 - val_accuracy: 0.9482 - 370ms/epoch - 10ms/step Epoch 15/20 36/36 - 0s - loss: 0.0877 - accuracy: 0.9657 - val_loss: 0.1028 - val_accuracy: 0.9604 - 401ms/epoch - 11ms/step Epoch 16/20 36/36 - 0s - loss: 0.0856 - accuracy: 0.9659 - val_loss: 0.1194 - val_accuracy: 0.9507 - 378ms/epoch - 10ms/step Epoch 17/20 36/36 - 0s - loss: 0.0868 - accuracy: 0.9652 - val_loss: 0.0992 - val_accuracy: 0.9609 - 425ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.0835 - accuracy: 0.9670 - val_loss: 0.0910 - val_accuracy: 0.9649 - 361ms/epoch - 10ms/step Epoch 19/20 36/36 - 1s - loss: 0.0815 - accuracy: 0.9681 - val_loss: 0.0874 - val_accuracy: 0.9660 - 587ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0764 - accuracy: 0.9692 - val_loss: 0.0924 - val_accuracy: 0.9629 - 602ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 5]: 0.9629064798355103 Epoch 1/20 36/36 - 2s - loss: 0.4482 - accuracy: 0.8148 - val_loss: 0.3166 - val_accuracy: 0.8674 - 2s/epoch - 56ms/step Epoch 2/20 36/36 - 1s - loss: 0.2495 - accuracy: 0.9110 - val_loss: 0.1915 - val_accuracy: 0.9375 - 590ms/epoch - 16ms/step Epoch 3/20 36/36 - 1s - loss: 0.1800 - accuracy: 0.9382 - val_loss: 0.1526 - val_accuracy: 0.9441 - 589ms/epoch - 16ms/step Epoch 4/20 36/36 - 0s - loss: 0.1516 - accuracy: 0.9450 - val_loss: 0.1416 - val_accuracy: 0.9527 - 423ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1322 - accuracy: 0.9521 - val_loss: 0.1267 - val_accuracy: 0.9558 - 388ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.1246 - accuracy: 0.9553 - val_loss: 0.1292 - val_accuracy: 0.9533 - 397ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.1147 - accuracy: 0.9572 - val_loss: 0.1080 - val_accuracy: 0.9609 - 353ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.1078 - accuracy: 0.9588 - val_loss: 0.1052 - val_accuracy: 0.9609 - 393ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.1023 - accuracy: 0.9604 - val_loss: 0.1124 - val_accuracy: 0.9563 - 399ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0983 - accuracy: 0.9611 - val_loss: 0.1256 - val_accuracy: 0.9517 - 360ms/epoch - 10ms/step Epoch 11/20 36/36 - 0s - loss: 0.0972 - accuracy: 0.9614 - val_loss: 0.1023 - val_accuracy: 0.9604 - 349ms/epoch - 10ms/step Epoch 12/20 36/36 - 0s - loss: 0.0918 - accuracy: 0.9616 - val_loss: 0.1028 - val_accuracy: 0.9588 - 359ms/epoch - 10ms/step Epoch 13/20 36/36 - 0s - loss: 0.0889 - accuracy: 0.9632 - val_loss: 0.1039 - val_accuracy: 0.9563 - 393ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.0857 - accuracy: 0.9666 - val_loss: 0.1052 - val_accuracy: 0.9573 - 382ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0837 - accuracy: 0.9659 - val_loss: 0.1078 - val_accuracy: 0.9553 - 351ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.0835 - accuracy: 0.9667 - val_loss: 0.1044 - val_accuracy: 0.9558 - 396ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0773 - accuracy: 0.9690 - val_loss: 0.0982 - val_accuracy: 0.9639 - 370ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.0767 - accuracy: 0.9695 - val_loss: 0.1034 - val_accuracy: 0.9583 - 354ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.0753 - accuracy: 0.9683 - val_loss: 0.0990 - val_accuracy: 0.9614 - 352ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0764 - accuracy: 0.9691 - val_loss: 0.0973 - val_accuracy: 0.9593 - 377ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 7]: 0.9593495726585388 Epoch 1/20 36/36 - 3s - loss: 0.4090 - accuracy: 0.8370 - val_loss: 0.2542 - val_accuracy: 0.9172 - 3s/epoch - 70ms/step Epoch 2/20 36/36 - 1s - loss: 0.2255 - accuracy: 0.9196 - val_loss: 0.1696 - val_accuracy: 0.9436 - 641ms/epoch - 18ms/step Epoch 3/20 36/36 - 1s - loss: 0.1764 - accuracy: 0.9388 - val_loss: 0.1561 - val_accuracy: 0.9461 - 643ms/epoch - 18ms/step Epoch 4/20 36/36 - 1s - loss: 0.1601 - accuracy: 0.9428 - val_loss: 0.1326 - val_accuracy: 0.9527 - 623ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1508 - accuracy: 0.9456 - val_loss: 0.1364 - val_accuracy: 0.9492 - 634ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1420 - accuracy: 0.9479 - val_loss: 0.1287 - val_accuracy: 0.9507 - 620ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1359 - accuracy: 0.9480 - val_loss: 0.1214 - val_accuracy: 0.9543 - 594ms/epoch - 16ms/step Epoch 8/20 36/36 - 1s - loss: 0.1290 - accuracy: 0.9534 - val_loss: 0.1222 - val_accuracy: 0.9512 - 554ms/epoch - 15ms/step Epoch 9/20 36/36 - 0s - loss: 0.1239 - accuracy: 0.9531 - val_loss: 0.1365 - val_accuracy: 0.9477 - 439ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.1202 - accuracy: 0.9520 - val_loss: 0.1148 - val_accuracy: 0.9538 - 422ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1143 - accuracy: 0.9549 - val_loss: 0.1115 - val_accuracy: 0.9558 - 427ms/epoch - 12ms/step Epoch 12/20 36/36 - 0s - loss: 0.1166 - accuracy: 0.9539 - val_loss: 0.1168 - val_accuracy: 0.9533 - 442ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.1099 - accuracy: 0.9562 - val_loss: 0.1206 - val_accuracy: 0.9512 - 423ms/epoch - 12ms/step Epoch 14/20 36/36 - 0s - loss: 0.1084 - accuracy: 0.9581 - val_loss: 0.1023 - val_accuracy: 0.9604 - 445ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.1030 - accuracy: 0.9592 - val_loss: 0.1073 - val_accuracy: 0.9568 - 430ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.1026 - accuracy: 0.9594 - val_loss: 0.1204 - val_accuracy: 0.9497 - 398ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.1028 - accuracy: 0.9593 - val_loss: 0.1069 - val_accuracy: 0.9543 - 423ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.0978 - accuracy: 0.9600 - val_loss: 0.1106 - val_accuracy: 0.9563 - 401ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0961 - accuracy: 0.9600 - val_loss: 0.1206 - val_accuracy: 0.9497 - 409ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0974 - accuracy: 0.9606 - val_loss: 0.1013 - val_accuracy: 0.9588 - 392ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 3]: 0.9588414430618286 Epoch 1/20 36/36 - 1s - loss: 0.4456 - accuracy: 0.8052 - val_loss: 0.2915 - val_accuracy: 0.8953 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.2564 - accuracy: 0.9090 - val_loss: 0.1868 - val_accuracy: 0.9355 - 415ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1931 - accuracy: 0.9313 - val_loss: 0.1617 - val_accuracy: 0.9426 - 413ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1640 - accuracy: 0.9415 - val_loss: 0.1607 - val_accuracy: 0.9390 - 385ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1483 - accuracy: 0.9456 - val_loss: 0.1230 - val_accuracy: 0.9553 - 406ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.1365 - accuracy: 0.9498 - val_loss: 0.1217 - val_accuracy: 0.9553 - 385ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.1299 - accuracy: 0.9516 - val_loss: 0.1400 - val_accuracy: 0.9487 - 363ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.1224 - accuracy: 0.9524 - val_loss: 0.1321 - val_accuracy: 0.9512 - 380ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.1159 - accuracy: 0.9549 - val_loss: 0.1131 - val_accuracy: 0.9563 - 431ms/epoch - 12ms/step Epoch 10/20 36/36 - 1s - loss: 0.1146 - accuracy: 0.9559 - val_loss: 0.1235 - val_accuracy: 0.9522 - 597ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1119 - accuracy: 0.9563 - val_loss: 0.1089 - val_accuracy: 0.9609 - 593ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.1083 - accuracy: 0.9603 - val_loss: 0.1110 - val_accuracy: 0.9548 - 627ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.1040 - accuracy: 0.9593 - val_loss: 0.1071 - val_accuracy: 0.9558 - 586ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.1004 - accuracy: 0.9604 - val_loss: 0.1061 - val_accuracy: 0.9578 - 602ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0962 - accuracy: 0.9611 - val_loss: 0.1162 - val_accuracy: 0.9502 - 604ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0962 - accuracy: 0.9621 - val_loss: 0.1062 - val_accuracy: 0.9578 - 598ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.0954 - accuracy: 0.9603 - val_loss: 0.1044 - val_accuracy: 0.9583 - 621ms/epoch - 17ms/step Epoch 18/20 36/36 - 0s - loss: 0.0918 - accuracy: 0.9621 - val_loss: 0.0921 - val_accuracy: 0.9665 - 440ms/epoch - 12ms/step Epoch 19/20 36/36 - 0s - loss: 0.0944 - accuracy: 0.9631 - val_loss: 0.1023 - val_accuracy: 0.9619 - 369ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0899 - accuracy: 0.9628 - val_loss: 0.0984 - val_accuracy: 0.9624 - 374ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 5]: 0.9623983502388 Epoch 1/20 36/36 - 1s - loss: 0.4698 - accuracy: 0.7940 - val_loss: 0.3291 - val_accuracy: 0.8725 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.2639 - accuracy: 0.9042 - val_loss: 0.1994 - val_accuracy: 0.9304 - 382ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1927 - accuracy: 0.9317 - val_loss: 0.1735 - val_accuracy: 0.9334 - 359ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1664 - accuracy: 0.9400 - val_loss: 0.1428 - val_accuracy: 0.9461 - 345ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.1475 - accuracy: 0.9465 - val_loss: 0.1467 - val_accuracy: 0.9436 - 352ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1339 - accuracy: 0.9501 - val_loss: 0.1417 - val_accuracy: 0.9477 - 358ms/epoch - 10ms/step Epoch 7/20 36/36 - 0s - loss: 0.1262 - accuracy: 0.9544 - val_loss: 0.1369 - val_accuracy: 0.9487 - 395ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1189 - accuracy: 0.9551 - val_loss: 0.1236 - val_accuracy: 0.9527 - 377ms/epoch - 10ms/step Epoch 9/20 36/36 - 0s - loss: 0.1130 - accuracy: 0.9567 - val_loss: 0.1336 - val_accuracy: 0.9466 - 397ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.1096 - accuracy: 0.9592 - val_loss: 0.1108 - val_accuracy: 0.9553 - 392ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.1075 - accuracy: 0.9578 - val_loss: 0.1228 - val_accuracy: 0.9517 - 362ms/epoch - 10ms/step Epoch 12/20 36/36 - 0s - loss: 0.1056 - accuracy: 0.9594 - val_loss: 0.1136 - val_accuracy: 0.9578 - 356ms/epoch - 10ms/step Epoch 13/20 36/36 - 0s - loss: 0.1001 - accuracy: 0.9620 - val_loss: 0.1021 - val_accuracy: 0.9609 - 352ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0980 - accuracy: 0.9614 - val_loss: 0.1014 - val_accuracy: 0.9588 - 394ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0950 - accuracy: 0.9602 - val_loss: 0.1066 - val_accuracy: 0.9599 - 383ms/epoch - 11ms/step Epoch 16/20 36/36 - 0s - loss: 0.0945 - accuracy: 0.9622 - val_loss: 0.1001 - val_accuracy: 0.9588 - 387ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0888 - accuracy: 0.9650 - val_loss: 0.1102 - val_accuracy: 0.9522 - 368ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.0907 - accuracy: 0.9632 - val_loss: 0.0996 - val_accuracy: 0.9619 - 395ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0861 - accuracy: 0.9669 - val_loss: 0.1248 - val_accuracy: 0.9487 - 408ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0875 - accuracy: 0.9637 - val_loss: 0.1069 - val_accuracy: 0.9558 - 347ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 7]: 0.9557926654815674 Epoch 1/20 36/36 - 2s - loss: 0.3611 - accuracy: 0.8690 - val_loss: 0.1860 - val_accuracy: 0.9273 - 2s/epoch - 65ms/step Epoch 2/20 36/36 - 1s - loss: 0.1938 - accuracy: 0.9335 - val_loss: 0.1615 - val_accuracy: 0.9395 - 994ms/epoch - 28ms/step Epoch 3/20 36/36 - 1s - loss: 0.1590 - accuracy: 0.9423 - val_loss: 0.1520 - val_accuracy: 0.9431 - 996ms/epoch - 28ms/step Epoch 4/20 36/36 - 1s - loss: 0.1450 - accuracy: 0.9466 - val_loss: 0.1378 - val_accuracy: 0.9466 - 900ms/epoch - 25ms/step Epoch 5/20 36/36 - 1s - loss: 0.1325 - accuracy: 0.9491 - val_loss: 0.1142 - val_accuracy: 0.9563 - 604ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1212 - accuracy: 0.9526 - val_loss: 0.1342 - val_accuracy: 0.9512 - 586ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.1130 - accuracy: 0.9567 - val_loss: 0.1239 - val_accuracy: 0.9522 - 597ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.1104 - accuracy: 0.9573 - val_loss: 0.1226 - val_accuracy: 0.9558 - 581ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1068 - accuracy: 0.9566 - val_loss: 0.1209 - val_accuracy: 0.9533 - 612ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1015 - accuracy: 0.9597 - val_loss: 0.1131 - val_accuracy: 0.9558 - 606ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1002 - accuracy: 0.9601 - val_loss: 0.0996 - val_accuracy: 0.9614 - 588ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.0965 - accuracy: 0.9604 - val_loss: 0.1063 - val_accuracy: 0.9588 - 600ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.0920 - accuracy: 0.9616 - val_loss: 0.1075 - val_accuracy: 0.9583 - 617ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.0873 - accuracy: 0.9639 - val_loss: 0.1011 - val_accuracy: 0.9614 - 601ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0863 - accuracy: 0.9633 - val_loss: 0.1386 - val_accuracy: 0.9431 - 605ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0867 - accuracy: 0.9651 - val_loss: 0.1125 - val_accuracy: 0.9558 - 591ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0814 - accuracy: 0.9676 - val_loss: 0.0977 - val_accuracy: 0.9624 - 610ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0787 - accuracy: 0.9672 - val_loss: 0.0970 - val_accuracy: 0.9654 - 606ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0785 - accuracy: 0.9690 - val_loss: 0.0914 - val_accuracy: 0.9685 - 600ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0756 - accuracy: 0.9701 - val_loss: 0.0836 - val_accuracy: 0.9715 - 593ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 3]: 0.9715447425842285 Epoch 1/20 36/36 - 2s - loss: 0.3831 - accuracy: 0.8471 - val_loss: 0.2429 - val_accuracy: 0.9197 - 2s/epoch - 66ms/step Epoch 2/20 36/36 - 1s - loss: 0.1939 - accuracy: 0.9340 - val_loss: 0.1755 - val_accuracy: 0.9355 - 932ms/epoch - 26ms/step Epoch 3/20 36/36 - 1s - loss: 0.1506 - accuracy: 0.9467 - val_loss: 0.1510 - val_accuracy: 0.9482 - 948ms/epoch - 26ms/step Epoch 4/20 36/36 - 1s - loss: 0.1331 - accuracy: 0.9528 - val_loss: 0.1284 - val_accuracy: 0.9543 - 757ms/epoch - 21ms/step Epoch 5/20 36/36 - 1s - loss: 0.1170 - accuracy: 0.9549 - val_loss: 0.1221 - val_accuracy: 0.9502 - 609ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1119 - accuracy: 0.9556 - val_loss: 0.1206 - val_accuracy: 0.9548 - 594ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1062 - accuracy: 0.9574 - val_loss: 0.1259 - val_accuracy: 0.9482 - 573ms/epoch - 16ms/step Epoch 8/20 36/36 - 1s - loss: 0.1012 - accuracy: 0.9599 - val_loss: 0.1215 - val_accuracy: 0.9507 - 611ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.0928 - accuracy: 0.9639 - val_loss: 0.0941 - val_accuracy: 0.9670 - 598ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.0897 - accuracy: 0.9653 - val_loss: 0.1072 - val_accuracy: 0.9568 - 583ms/epoch - 16ms/step Epoch 11/20 36/36 - 1s - loss: 0.0846 - accuracy: 0.9663 - val_loss: 0.1067 - val_accuracy: 0.9583 - 573ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.0823 - accuracy: 0.9667 - val_loss: 0.1097 - val_accuracy: 0.9553 - 586ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.0846 - accuracy: 0.9660 - val_loss: 0.0903 - val_accuracy: 0.9660 - 579ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0807 - accuracy: 0.9687 - val_loss: 0.0889 - val_accuracy: 0.9685 - 602ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0755 - accuracy: 0.9697 - val_loss: 0.0948 - val_accuracy: 0.9629 - 594ms/epoch - 16ms/step Epoch 16/20 36/36 - 1s - loss: 0.0748 - accuracy: 0.9701 - val_loss: 0.0862 - val_accuracy: 0.9700 - 577ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0722 - accuracy: 0.9706 - val_loss: 0.0844 - val_accuracy: 0.9685 - 577ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0692 - accuracy: 0.9727 - val_loss: 0.1174 - val_accuracy: 0.9477 - 676ms/epoch - 19ms/step Epoch 19/20 36/36 - 1s - loss: 0.0685 - accuracy: 0.9734 - val_loss: 0.0944 - val_accuracy: 0.9634 - 609ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0707 - accuracy: 0.9713 - val_loss: 0.0960 - val_accuracy: 0.9639 - 595ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 5]: 0.9639227390289307 Epoch 1/20 36/36 - 1s - loss: 0.3976 - accuracy: 0.8358 - val_loss: 0.2869 - val_accuracy: 0.8943 - 1s/epoch - 41ms/step Epoch 2/20 36/36 - 1s - loss: 0.2007 - accuracy: 0.9298 - val_loss: 0.1636 - val_accuracy: 0.9400 - 533ms/epoch - 15ms/step Epoch 3/20 36/36 - 1s - loss: 0.1534 - accuracy: 0.9443 - val_loss: 0.1386 - val_accuracy: 0.9472 - 541ms/epoch - 15ms/step Epoch 4/20 36/36 - 1s - loss: 0.1343 - accuracy: 0.9511 - val_loss: 0.1687 - val_accuracy: 0.9319 - 521ms/epoch - 14ms/step Epoch 5/20 36/36 - 1s - loss: 0.1201 - accuracy: 0.9544 - val_loss: 0.1398 - val_accuracy: 0.9466 - 518ms/epoch - 14ms/step Epoch 6/20 36/36 - 1s - loss: 0.1115 - accuracy: 0.9571 - val_loss: 0.1385 - val_accuracy: 0.9451 - 516ms/epoch - 14ms/step Epoch 7/20 36/36 - 1s - loss: 0.1010 - accuracy: 0.9611 - val_loss: 0.1037 - val_accuracy: 0.9614 - 518ms/epoch - 14ms/step Epoch 8/20 36/36 - 1s - loss: 0.0956 - accuracy: 0.9618 - val_loss: 0.1133 - val_accuracy: 0.9548 - 533ms/epoch - 15ms/step Epoch 9/20 36/36 - 1s - loss: 0.0907 - accuracy: 0.9646 - val_loss: 0.1047 - val_accuracy: 0.9604 - 508ms/epoch - 14ms/step Epoch 10/20 36/36 - 1s - loss: 0.0884 - accuracy: 0.9647 - val_loss: 0.1347 - val_accuracy: 0.9446 - 859ms/epoch - 24ms/step Epoch 11/20 36/36 - 1s - loss: 0.0856 - accuracy: 0.9647 - val_loss: 0.1029 - val_accuracy: 0.9593 - 869ms/epoch - 24ms/step Epoch 12/20 36/36 - 1s - loss: 0.0819 - accuracy: 0.9668 - val_loss: 0.1030 - val_accuracy: 0.9583 - 849ms/epoch - 24ms/step Epoch 13/20 36/36 - 1s - loss: 0.0786 - accuracy: 0.9675 - val_loss: 0.0901 - val_accuracy: 0.9670 - 852ms/epoch - 24ms/step Epoch 14/20 36/36 - 1s - loss: 0.0782 - accuracy: 0.9678 - val_loss: 0.0895 - val_accuracy: 0.9665 - 864ms/epoch - 24ms/step Epoch 15/20 36/36 - 1s - loss: 0.0724 - accuracy: 0.9707 - val_loss: 0.0940 - val_accuracy: 0.9629 - 724ms/epoch - 20ms/step Epoch 16/20 36/36 - 1s - loss: 0.0729 - accuracy: 0.9698 - val_loss: 0.0887 - val_accuracy: 0.9690 - 524ms/epoch - 15ms/step Epoch 17/20 36/36 - 1s - loss: 0.0683 - accuracy: 0.9722 - val_loss: 0.1040 - val_accuracy: 0.9553 - 515ms/epoch - 14ms/step Epoch 18/20 36/36 - 1s - loss: 0.0668 - accuracy: 0.9735 - val_loss: 0.0963 - val_accuracy: 0.9604 - 541ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.0651 - accuracy: 0.9743 - val_loss: 0.0902 - val_accuracy: 0.9660 - 528ms/epoch - 15ms/step Epoch 20/20 36/36 - 1s - loss: 0.0640 - accuracy: 0.9742 - val_loss: 0.0941 - val_accuracy: 0.9660 - 541ms/epoch - 15ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 7]: 0.9659552574157715 Epoch 1/20 36/36 - 2s - loss: 0.3448 - accuracy: 0.8673 - val_loss: 0.1924 - val_accuracy: 0.9309 - 2s/epoch - 43ms/step Epoch 2/20 36/36 - 1s - loss: 0.1861 - accuracy: 0.9337 - val_loss: 0.1748 - val_accuracy: 0.9360 - 640ms/epoch - 18ms/step Epoch 3/20 36/36 - 1s - loss: 0.1545 - accuracy: 0.9454 - val_loss: 0.1588 - val_accuracy: 0.9416 - 618ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1446 - accuracy: 0.9495 - val_loss: 0.1281 - val_accuracy: 0.9553 - 590ms/epoch - 16ms/step Epoch 5/20 36/36 - 1s - loss: 0.1336 - accuracy: 0.9509 - val_loss: 0.1233 - val_accuracy: 0.9573 - 636ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1249 - accuracy: 0.9538 - val_loss: 0.1233 - val_accuracy: 0.9497 - 628ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1209 - accuracy: 0.9538 - val_loss: 0.1278 - val_accuracy: 0.9522 - 643ms/epoch - 18ms/step Epoch 8/20 36/36 - 1s - loss: 0.1173 - accuracy: 0.9549 - val_loss: 0.1479 - val_accuracy: 0.9405 - 646ms/epoch - 18ms/step Epoch 9/20 36/36 - 1s - loss: 0.1097 - accuracy: 0.9557 - val_loss: 0.1478 - val_accuracy: 0.9446 - 584ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.1076 - accuracy: 0.9566 - val_loss: 0.1103 - val_accuracy: 0.9538 - 727ms/epoch - 20ms/step Epoch 11/20 36/36 - 1s - loss: 0.1032 - accuracy: 0.9603 - val_loss: 0.1135 - val_accuracy: 0.9578 - 968ms/epoch - 27ms/step Epoch 12/20 36/36 - 1s - loss: 0.0981 - accuracy: 0.9621 - val_loss: 0.1117 - val_accuracy: 0.9568 - 1s/epoch - 28ms/step Epoch 13/20 36/36 - 1s - loss: 0.0979 - accuracy: 0.9602 - val_loss: 0.1034 - val_accuracy: 0.9619 - 1s/epoch - 28ms/step Epoch 14/20 36/36 - 1s - loss: 0.0925 - accuracy: 0.9609 - val_loss: 0.1154 - val_accuracy: 0.9553 - 999ms/epoch - 28ms/step Epoch 15/20 36/36 - 1s - loss: 0.0912 - accuracy: 0.9621 - val_loss: 0.1017 - val_accuracy: 0.9604 - 831ms/epoch - 23ms/step Epoch 16/20 36/36 - 1s - loss: 0.0891 - accuracy: 0.9635 - val_loss: 0.0977 - val_accuracy: 0.9624 - 627ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.0859 - accuracy: 0.9640 - val_loss: 0.1026 - val_accuracy: 0.9578 - 614ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0866 - accuracy: 0.9655 - val_loss: 0.0969 - val_accuracy: 0.9639 - 639ms/epoch - 18ms/step Epoch 19/20 36/36 - 1s - loss: 0.0853 - accuracy: 0.9645 - val_loss: 0.1106 - val_accuracy: 0.9548 - 608ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0839 - accuracy: 0.9658 - val_loss: 0.0922 - val_accuracy: 0.9644 - 615ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 3]: 0.9644308686256409 Epoch 1/20 36/36 - 2s - loss: 0.4027 - accuracy: 0.8263 - val_loss: 0.2564 - val_accuracy: 0.9126 - 2s/epoch - 42ms/step Epoch 2/20 36/36 - 1s - loss: 0.2064 - accuracy: 0.9265 - val_loss: 0.1817 - val_accuracy: 0.9380 - 612ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1601 - accuracy: 0.9439 - val_loss: 0.1563 - val_accuracy: 0.9441 - 608ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1466 - accuracy: 0.9469 - val_loss: 0.1438 - val_accuracy: 0.9472 - 579ms/epoch - 16ms/step Epoch 5/20 36/36 - 1s - loss: 0.1328 - accuracy: 0.9496 - val_loss: 0.1182 - val_accuracy: 0.9568 - 610ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1256 - accuracy: 0.9522 - val_loss: 0.1275 - val_accuracy: 0.9502 - 583ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.1155 - accuracy: 0.9553 - val_loss: 0.1206 - val_accuracy: 0.9548 - 585ms/epoch - 16ms/step Epoch 8/20 36/36 - 1s - loss: 0.1089 - accuracy: 0.9575 - val_loss: 0.1109 - val_accuracy: 0.9548 - 571ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1028 - accuracy: 0.9595 - val_loss: 0.1168 - val_accuracy: 0.9538 - 636ms/epoch - 18ms/step Epoch 10/20 36/36 - 1s - loss: 0.0980 - accuracy: 0.9607 - val_loss: 0.1048 - val_accuracy: 0.9583 - 890ms/epoch - 25ms/step Epoch 11/20 36/36 - 1s - loss: 0.0950 - accuracy: 0.9627 - val_loss: 0.1043 - val_accuracy: 0.9588 - 963ms/epoch - 27ms/step Epoch 12/20 36/36 - 1s - loss: 0.0933 - accuracy: 0.9634 - val_loss: 0.1154 - val_accuracy: 0.9548 - 944ms/epoch - 26ms/step Epoch 13/20 36/36 - 1s - loss: 0.0902 - accuracy: 0.9636 - val_loss: 0.1093 - val_accuracy: 0.9568 - 946ms/epoch - 26ms/step Epoch 14/20 36/36 - 1s - loss: 0.0866 - accuracy: 0.9648 - val_loss: 0.1085 - val_accuracy: 0.9548 - 954ms/epoch - 27ms/step Epoch 15/20 36/36 - 1s - loss: 0.0812 - accuracy: 0.9663 - val_loss: 0.0965 - val_accuracy: 0.9609 - 624ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0807 - accuracy: 0.9668 - val_loss: 0.1069 - val_accuracy: 0.9563 - 597ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.0797 - accuracy: 0.9669 - val_loss: 0.0997 - val_accuracy: 0.9614 - 584ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0787 - accuracy: 0.9684 - val_loss: 0.1032 - val_accuracy: 0.9573 - 584ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0766 - accuracy: 0.9693 - val_loss: 0.0958 - val_accuracy: 0.9614 - 607ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0789 - accuracy: 0.9659 - val_loss: 0.0969 - val_accuracy: 0.9634 - 619ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 5]: 0.9634146094322205 Epoch 1/20 36/36 - 2s - loss: 0.4097 - accuracy: 0.8287 - val_loss: 0.2345 - val_accuracy: 0.9212 - 2s/epoch - 42ms/step Epoch 2/20 36/36 - 1s - loss: 0.2035 - accuracy: 0.9290 - val_loss: 0.1593 - val_accuracy: 0.9421 - 546ms/epoch - 15ms/step Epoch 3/20 36/36 - 1s - loss: 0.1558 - accuracy: 0.9443 - val_loss: 0.1213 - val_accuracy: 0.9568 - 514ms/epoch - 14ms/step Epoch 4/20 36/36 - 1s - loss: 0.1359 - accuracy: 0.9511 - val_loss: 0.1287 - val_accuracy: 0.9527 - 575ms/epoch - 16ms/step Epoch 5/20 36/36 - 1s - loss: 0.1255 - accuracy: 0.9527 - val_loss: 0.1256 - val_accuracy: 0.9573 - 526ms/epoch - 15ms/step Epoch 6/20 36/36 - 1s - loss: 0.1147 - accuracy: 0.9561 - val_loss: 0.1280 - val_accuracy: 0.9548 - 562ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.1043 - accuracy: 0.9596 - val_loss: 0.1235 - val_accuracy: 0.9517 - 576ms/epoch - 16ms/step Epoch 8/20 36/36 - 1s - loss: 0.1000 - accuracy: 0.9607 - val_loss: 0.1127 - val_accuracy: 0.9543 - 556ms/epoch - 15ms/step Epoch 9/20 36/36 - 1s - loss: 0.0925 - accuracy: 0.9629 - val_loss: 0.0954 - val_accuracy: 0.9629 - 572ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.0915 - accuracy: 0.9616 - val_loss: 0.0990 - val_accuracy: 0.9624 - 707ms/epoch - 20ms/step Epoch 11/20 36/36 - 1s - loss: 0.0871 - accuracy: 0.9644 - val_loss: 0.0977 - val_accuracy: 0.9609 - 894ms/epoch - 25ms/step Epoch 12/20 36/36 - 1s - loss: 0.0860 - accuracy: 0.9653 - val_loss: 0.0876 - val_accuracy: 0.9675 - 872ms/epoch - 24ms/step Epoch 13/20 36/36 - 1s - loss: 0.0854 - accuracy: 0.9641 - val_loss: 0.0931 - val_accuracy: 0.9665 - 870ms/epoch - 24ms/step Epoch 14/20 36/36 - 1s - loss: 0.0783 - accuracy: 0.9674 - val_loss: 0.0944 - val_accuracy: 0.9644 - 872ms/epoch - 24ms/step Epoch 15/20 36/36 - 1s - loss: 0.0779 - accuracy: 0.9682 - val_loss: 0.1030 - val_accuracy: 0.9599 - 869ms/epoch - 24ms/step Epoch 16/20 36/36 - 1s - loss: 0.0758 - accuracy: 0.9690 - val_loss: 0.0911 - val_accuracy: 0.9649 - 537ms/epoch - 15ms/step Epoch 17/20 36/36 - 1s - loss: 0.0749 - accuracy: 0.9691 - val_loss: 0.0882 - val_accuracy: 0.9685 - 559ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0751 - accuracy: 0.9693 - val_loss: 0.1081 - val_accuracy: 0.9543 - 514ms/epoch - 14ms/step Epoch 19/20 36/36 - 1s - loss: 0.0684 - accuracy: 0.9721 - val_loss: 0.1088 - val_accuracy: 0.9533 - 543ms/epoch - 15ms/step Epoch 20/20 36/36 - 1s - loss: 0.0697 - accuracy: 0.9718 - val_loss: 0.0981 - val_accuracy: 0.9614 - 570ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 7]: 0.9613820910453796 Epoch 1/20 36/36 - 2s - loss: 0.3937 - accuracy: 0.8423 - val_loss: 0.2358 - val_accuracy: 0.9182 - 2s/epoch - 67ms/step Epoch 2/20 36/36 - 1s - loss: 0.2096 - accuracy: 0.9271 - val_loss: 0.1738 - val_accuracy: 0.9385 - 991ms/epoch - 28ms/step Epoch 3/20 36/36 - 1s - loss: 0.1696 - accuracy: 0.9380 - val_loss: 0.1717 - val_accuracy: 0.9355 - 889ms/epoch - 25ms/step Epoch 4/20 36/36 - 1s - loss: 0.1593 - accuracy: 0.9428 - val_loss: 0.1366 - val_accuracy: 0.9477 - 617ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1485 - accuracy: 0.9462 - val_loss: 0.1352 - val_accuracy: 0.9533 - 640ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1390 - accuracy: 0.9475 - val_loss: 0.1368 - val_accuracy: 0.9487 - 626ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1296 - accuracy: 0.9530 - val_loss: 0.1338 - val_accuracy: 0.9507 - 595ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.1257 - accuracy: 0.9512 - val_loss: 0.1126 - val_accuracy: 0.9583 - 635ms/epoch - 18ms/step Epoch 9/20 36/36 - 1s - loss: 0.1246 - accuracy: 0.9552 - val_loss: 0.1113 - val_accuracy: 0.9578 - 583ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.1151 - accuracy: 0.9558 - val_loss: 0.1219 - val_accuracy: 0.9497 - 618ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1151 - accuracy: 0.9544 - val_loss: 0.1064 - val_accuracy: 0.9588 - 598ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.1109 - accuracy: 0.9555 - val_loss: 0.1070 - val_accuracy: 0.9588 - 613ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.1086 - accuracy: 0.9558 - val_loss: 0.1249 - val_accuracy: 0.9482 - 623ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.1066 - accuracy: 0.9577 - val_loss: 0.1335 - val_accuracy: 0.9472 - 585ms/epoch - 16ms/step Epoch 15/20 36/36 - 1s - loss: 0.1041 - accuracy: 0.9574 - val_loss: 0.1073 - val_accuracy: 0.9568 - 613ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.1004 - accuracy: 0.9609 - val_loss: 0.1089 - val_accuracy: 0.9553 - 596ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.0997 - accuracy: 0.9604 - val_loss: 0.0987 - val_accuracy: 0.9619 - 632ms/epoch - 18ms/step Epoch 18/20 36/36 - 1s - loss: 0.0995 - accuracy: 0.9590 - val_loss: 0.0999 - val_accuracy: 0.9619 - 593ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0968 - accuracy: 0.9626 - val_loss: 0.1026 - val_accuracy: 0.9583 - 619ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0944 - accuracy: 0.9610 - val_loss: 0.0992 - val_accuracy: 0.9619 - 971ms/epoch - 27ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 3]: 0.9618902206420898 Epoch 1/20 36/36 - 2s - loss: 0.3899 - accuracy: 0.8354 - val_loss: 0.2355 - val_accuracy: 0.9273 - 2s/epoch - 65ms/step Epoch 2/20 36/36 - 1s - loss: 0.2114 - accuracy: 0.9256 - val_loss: 0.1716 - val_accuracy: 0.9416 - 945ms/epoch - 26ms/step Epoch 3/20 36/36 - 1s - loss: 0.1705 - accuracy: 0.9379 - val_loss: 0.1771 - val_accuracy: 0.9370 - 655ms/epoch - 18ms/step Epoch 4/20 36/36 - 1s - loss: 0.1511 - accuracy: 0.9444 - val_loss: 0.1304 - val_accuracy: 0.9497 - 625ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1400 - accuracy: 0.9483 - val_loss: 0.1294 - val_accuracy: 0.9517 - 602ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1293 - accuracy: 0.9525 - val_loss: 0.1187 - val_accuracy: 0.9553 - 584ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.1202 - accuracy: 0.9545 - val_loss: 0.1210 - val_accuracy: 0.9527 - 607ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.1159 - accuracy: 0.9563 - val_loss: 0.1193 - val_accuracy: 0.9558 - 583ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1105 - accuracy: 0.9577 - val_loss: 0.1214 - val_accuracy: 0.9497 - 605ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1055 - accuracy: 0.9592 - val_loss: 0.0985 - val_accuracy: 0.9624 - 584ms/epoch - 16ms/step Epoch 11/20 36/36 - 1s - loss: 0.1080 - accuracy: 0.9573 - val_loss: 0.1158 - val_accuracy: 0.9502 - 578ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.0979 - accuracy: 0.9604 - val_loss: 0.1068 - val_accuracy: 0.9568 - 562ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.0996 - accuracy: 0.9606 - val_loss: 0.1067 - val_accuracy: 0.9563 - 561ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0957 - accuracy: 0.9611 - val_loss: 0.1142 - val_accuracy: 0.9522 - 561ms/epoch - 16ms/step Epoch 15/20 36/36 - 1s - loss: 0.0933 - accuracy: 0.9638 - val_loss: 0.0949 - val_accuracy: 0.9649 - 599ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0950 - accuracy: 0.9597 - val_loss: 0.1105 - val_accuracy: 0.9502 - 578ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0930 - accuracy: 0.9608 - val_loss: 0.1050 - val_accuracy: 0.9588 - 570ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0878 - accuracy: 0.9651 - val_loss: 0.0981 - val_accuracy: 0.9614 - 615ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0860 - accuracy: 0.9658 - val_loss: 0.0966 - val_accuracy: 0.9593 - 607ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0836 - accuracy: 0.9662 - val_loss: 0.1022 - val_accuracy: 0.9593 - 949ms/epoch - 26ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 5]: 0.9593495726585388 Epoch 1/20 36/36 - 2s - loss: 0.4132 - accuracy: 0.8230 - val_loss: 0.2276 - val_accuracy: 0.9212 - 2s/epoch - 43ms/step Epoch 2/20 36/36 - 1s - loss: 0.2196 - accuracy: 0.9199 - val_loss: 0.1756 - val_accuracy: 0.9365 - 521ms/epoch - 14ms/step Epoch 3/20 36/36 - 1s - loss: 0.1682 - accuracy: 0.9397 - val_loss: 0.1351 - val_accuracy: 0.9492 - 522ms/epoch - 14ms/step Epoch 4/20 36/36 - 1s - loss: 0.1520 - accuracy: 0.9448 - val_loss: 0.1653 - val_accuracy: 0.9355 - 553ms/epoch - 15ms/step Epoch 5/20 36/36 - 1s - loss: 0.1423 - accuracy: 0.9474 - val_loss: 0.1215 - val_accuracy: 0.9558 - 548ms/epoch - 15ms/step Epoch 6/20 36/36 - 1s - loss: 0.1258 - accuracy: 0.9531 - val_loss: 0.1087 - val_accuracy: 0.9578 - 556ms/epoch - 15ms/step Epoch 7/20 36/36 - 1s - loss: 0.1177 - accuracy: 0.9555 - val_loss: 0.1009 - val_accuracy: 0.9599 - 513ms/epoch - 14ms/step Epoch 8/20 36/36 - 1s - loss: 0.1143 - accuracy: 0.9567 - val_loss: 0.1082 - val_accuracy: 0.9573 - 572ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1087 - accuracy: 0.9570 - val_loss: 0.1076 - val_accuracy: 0.9563 - 532ms/epoch - 15ms/step Epoch 10/20 36/36 - 1s - loss: 0.1034 - accuracy: 0.9586 - val_loss: 0.1123 - val_accuracy: 0.9543 - 543ms/epoch - 15ms/step Epoch 11/20 36/36 - 1s - loss: 0.0999 - accuracy: 0.9607 - val_loss: 0.1165 - val_accuracy: 0.9517 - 630ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.0980 - accuracy: 0.9612 - val_loss: 0.1152 - val_accuracy: 0.9507 - 877ms/epoch - 24ms/step Epoch 13/20 36/36 - 1s - loss: 0.0922 - accuracy: 0.9628 - val_loss: 0.0997 - val_accuracy: 0.9604 - 862ms/epoch - 24ms/step Epoch 14/20 36/36 - 1s - loss: 0.0908 - accuracy: 0.9638 - val_loss: 0.1221 - val_accuracy: 0.9466 - 844ms/epoch - 23ms/step Epoch 15/20 36/36 - 1s - loss: 0.0866 - accuracy: 0.9648 - val_loss: 0.0987 - val_accuracy: 0.9634 - 845ms/epoch - 23ms/step Epoch 16/20 36/36 - 1s - loss: 0.0880 - accuracy: 0.9653 - val_loss: 0.0960 - val_accuracy: 0.9614 - 860ms/epoch - 24ms/step Epoch 17/20 36/36 - 1s - loss: 0.0801 - accuracy: 0.9673 - val_loss: 0.1007 - val_accuracy: 0.9619 - 607ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0838 - accuracy: 0.9656 - val_loss: 0.1038 - val_accuracy: 0.9558 - 523ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.0803 - accuracy: 0.9678 - val_loss: 0.0990 - val_accuracy: 0.9558 - 511ms/epoch - 14ms/step Epoch 20/20 36/36 - 1s - loss: 0.0803 - accuracy: 0.9677 - val_loss: 0.0940 - val_accuracy: 0.9639 - 528ms/epoch - 15ms/step Validation accuracy for Model of [Learning Rate 0.001 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 7]: 0.9639227390289307 Epoch 1/20 36/36 - 2s - loss: 0.2589 - accuracy: 0.8929 - val_loss: 0.1716 - val_accuracy: 0.9360 - 2s/epoch - 49ms/step Epoch 2/20 36/36 - 0s - loss: 0.1376 - accuracy: 0.9454 - val_loss: 0.1164 - val_accuracy: 0.9583 - 434ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1157 - accuracy: 0.9523 - val_loss: 0.1039 - val_accuracy: 0.9604 - 451ms/epoch - 13ms/step Epoch 4/20 36/36 - 0s - loss: 0.1129 - accuracy: 0.9537 - val_loss: 0.0864 - val_accuracy: 0.9695 - 449ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1015 - accuracy: 0.9597 - val_loss: 0.1105 - val_accuracy: 0.9543 - 416ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.0914 - accuracy: 0.9613 - val_loss: 0.0992 - val_accuracy: 0.9634 - 429ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.0804 - accuracy: 0.9664 - val_loss: 0.0828 - val_accuracy: 0.9695 - 466ms/epoch - 13ms/step Epoch 8/20 36/36 - 0s - loss: 0.0860 - accuracy: 0.9649 - val_loss: 0.0970 - val_accuracy: 0.9604 - 473ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.0761 - accuracy: 0.9691 - val_loss: 0.0985 - val_accuracy: 0.9654 - 284ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.0743 - accuracy: 0.9699 - val_loss: 0.1009 - val_accuracy: 0.9573 - 288ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.0752 - accuracy: 0.9688 - val_loss: 0.1235 - val_accuracy: 0.9583 - 280ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.0731 - accuracy: 0.9707 - val_loss: 0.0813 - val_accuracy: 0.9746 - 288ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0645 - accuracy: 0.9751 - val_loss: 0.0792 - val_accuracy: 0.9705 - 279ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0612 - accuracy: 0.9754 - val_loss: 0.0875 - val_accuracy: 0.9705 - 294ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0708 - accuracy: 0.9728 - val_loss: 0.0846 - val_accuracy: 0.9705 - 296ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0694 - accuracy: 0.9724 - val_loss: 0.0885 - val_accuracy: 0.9639 - 277ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0652 - accuracy: 0.9733 - val_loss: 0.0770 - val_accuracy: 0.9736 - 298ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0641 - accuracy: 0.9750 - val_loss: 0.0840 - val_accuracy: 0.9746 - 286ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0600 - accuracy: 0.9772 - val_loss: 0.0870 - val_accuracy: 0.9710 - 338ms/epoch - 9ms/step Epoch 20/20 36/36 - 0s - loss: 0.0586 - accuracy: 0.9763 - val_loss: 0.1006 - val_accuracy: 0.9705 - 278ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 3]: 0.9705284833908081 Epoch 1/20 36/36 - 1s - loss: 0.2832 - accuracy: 0.8812 - val_loss: 0.1524 - val_accuracy: 0.9416 - 1s/epoch - 36ms/step Epoch 2/20 36/36 - 0s - loss: 0.1367 - accuracy: 0.9469 - val_loss: 0.1263 - val_accuracy: 0.9527 - 294ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1115 - accuracy: 0.9559 - val_loss: 0.1369 - val_accuracy: 0.9492 - 276ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.0988 - accuracy: 0.9606 - val_loss: 0.1184 - val_accuracy: 0.9543 - 298ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.0887 - accuracy: 0.9637 - val_loss: 0.0930 - val_accuracy: 0.9660 - 281ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.0787 - accuracy: 0.9668 - val_loss: 0.0930 - val_accuracy: 0.9619 - 290ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.0808 - accuracy: 0.9658 - val_loss: 0.0803 - val_accuracy: 0.9721 - 285ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.0782 - accuracy: 0.9693 - val_loss: 0.1051 - val_accuracy: 0.9578 - 278ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.0705 - accuracy: 0.9704 - val_loss: 0.0925 - val_accuracy: 0.9649 - 281ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.0705 - accuracy: 0.9707 - val_loss: 0.0828 - val_accuracy: 0.9741 - 289ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.0670 - accuracy: 0.9727 - val_loss: 0.1256 - val_accuracy: 0.9461 - 298ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.0655 - accuracy: 0.9739 - val_loss: 0.0973 - val_accuracy: 0.9634 - 287ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0639 - accuracy: 0.9758 - val_loss: 0.0877 - val_accuracy: 0.9680 - 265ms/epoch - 7ms/step Epoch 14/20 36/36 - 0s - loss: 0.0619 - accuracy: 0.9759 - val_loss: 0.0902 - val_accuracy: 0.9721 - 263ms/epoch - 7ms/step Epoch 15/20 36/36 - 0s - loss: 0.0638 - accuracy: 0.9759 - val_loss: 0.0815 - val_accuracy: 0.9776 - 299ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0659 - accuracy: 0.9740 - val_loss: 0.0942 - val_accuracy: 0.9680 - 276ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0546 - accuracy: 0.9783 - val_loss: 0.0807 - val_accuracy: 0.9756 - 331ms/epoch - 9ms/step Epoch 18/20 36/36 - 0s - loss: 0.0510 - accuracy: 0.9807 - val_loss: 0.0936 - val_accuracy: 0.9715 - 338ms/epoch - 9ms/step Epoch 19/20 36/36 - 0s - loss: 0.0647 - accuracy: 0.9776 - val_loss: 0.0930 - val_accuracy: 0.9766 - 398ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0538 - accuracy: 0.9780 - val_loss: 0.0912 - val_accuracy: 0.9726 - 433ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 5]: 0.9725610017776489 Epoch 1/20 36/36 - 2s - loss: 0.2549 - accuracy: 0.8934 - val_loss: 0.1255 - val_accuracy: 0.9517 - 2s/epoch - 52ms/step Epoch 2/20 36/36 - 0s - loss: 0.1162 - accuracy: 0.9560 - val_loss: 0.1049 - val_accuracy: 0.9578 - 435ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.0987 - accuracy: 0.9600 - val_loss: 0.1091 - val_accuracy: 0.9568 - 436ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.0921 - accuracy: 0.9621 - val_loss: 0.0950 - val_accuracy: 0.9649 - 464ms/epoch - 13ms/step Epoch 5/20 36/36 - 0s - loss: 0.0840 - accuracy: 0.9656 - val_loss: 0.1284 - val_accuracy: 0.9477 - 401ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.0824 - accuracy: 0.9654 - val_loss: 0.1145 - val_accuracy: 0.9548 - 279ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.0724 - accuracy: 0.9705 - val_loss: 0.0901 - val_accuracy: 0.9715 - 278ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.0727 - accuracy: 0.9705 - val_loss: 0.0912 - val_accuracy: 0.9731 - 278ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.0629 - accuracy: 0.9747 - val_loss: 0.0970 - val_accuracy: 0.9609 - 271ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.0617 - accuracy: 0.9744 - val_loss: 0.1033 - val_accuracy: 0.9660 - 279ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.0682 - accuracy: 0.9739 - val_loss: 0.0961 - val_accuracy: 0.9644 - 274ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.0614 - accuracy: 0.9761 - val_loss: 0.1031 - val_accuracy: 0.9670 - 314ms/epoch - 9ms/step Epoch 13/20 36/36 - 0s - loss: 0.0681 - accuracy: 0.9728 - val_loss: 0.1066 - val_accuracy: 0.9573 - 320ms/epoch - 9ms/step Epoch 14/20 36/36 - 0s - loss: 0.0641 - accuracy: 0.9748 - val_loss: 0.0880 - val_accuracy: 0.9705 - 282ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0558 - accuracy: 0.9793 - val_loss: 0.1072 - val_accuracy: 0.9624 - 275ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0521 - accuracy: 0.9804 - val_loss: 0.0976 - val_accuracy: 0.9690 - 317ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.0544 - accuracy: 0.9785 - val_loss: 0.0940 - val_accuracy: 0.9715 - 289ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0527 - accuracy: 0.9778 - val_loss: 0.0946 - val_accuracy: 0.9670 - 267ms/epoch - 7ms/step Epoch 19/20 36/36 - 0s - loss: 0.0551 - accuracy: 0.9787 - val_loss: 0.0921 - val_accuracy: 0.9766 - 268ms/epoch - 7ms/step Epoch 20/20 36/36 - 0s - loss: 0.0529 - accuracy: 0.9800 - val_loss: 0.0869 - val_accuracy: 0.9741 - 286ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 7]: 0.9740853905677795 Epoch 1/20 36/36 - 1s - loss: 0.2476 - accuracy: 0.9012 - val_loss: 0.1359 - val_accuracy: 0.9538 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.1480 - accuracy: 0.9445 - val_loss: 0.1480 - val_accuracy: 0.9466 - 334ms/epoch - 9ms/step Epoch 3/20 36/36 - 0s - loss: 0.1287 - accuracy: 0.9519 - val_loss: 0.1241 - val_accuracy: 0.9487 - 281ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1101 - accuracy: 0.9556 - val_loss: 0.1200 - val_accuracy: 0.9517 - 383ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1006 - accuracy: 0.9586 - val_loss: 0.0878 - val_accuracy: 0.9665 - 481ms/epoch - 13ms/step Epoch 6/20 36/36 - 0s - loss: 0.1032 - accuracy: 0.9578 - val_loss: 0.0964 - val_accuracy: 0.9644 - 425ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.0995 - accuracy: 0.9590 - val_loss: 0.0979 - val_accuracy: 0.9619 - 425ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.0947 - accuracy: 0.9622 - val_loss: 0.1049 - val_accuracy: 0.9573 - 465ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.0874 - accuracy: 0.9636 - val_loss: 0.0906 - val_accuracy: 0.9700 - 469ms/epoch - 13ms/step Epoch 10/20 36/36 - 0s - loss: 0.0897 - accuracy: 0.9638 - val_loss: 0.1165 - val_accuracy: 0.9548 - 488ms/epoch - 14ms/step Epoch 11/20 36/36 - 0s - loss: 0.0840 - accuracy: 0.9663 - val_loss: 0.0850 - val_accuracy: 0.9700 - 474ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.0826 - accuracy: 0.9671 - val_loss: 0.0852 - val_accuracy: 0.9726 - 449ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.0779 - accuracy: 0.9673 - val_loss: 0.0981 - val_accuracy: 0.9660 - 469ms/epoch - 13ms/step Epoch 14/20 36/36 - 0s - loss: 0.0772 - accuracy: 0.9681 - val_loss: 0.0939 - val_accuracy: 0.9710 - 417ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.0759 - accuracy: 0.9683 - val_loss: 0.0902 - val_accuracy: 0.9700 - 278ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0791 - accuracy: 0.9670 - val_loss: 0.0978 - val_accuracy: 0.9690 - 277ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0726 - accuracy: 0.9711 - val_loss: 0.0993 - val_accuracy: 0.9670 - 282ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0697 - accuracy: 0.9718 - val_loss: 0.0956 - val_accuracy: 0.9685 - 299ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0706 - accuracy: 0.9697 - val_loss: 0.0873 - val_accuracy: 0.9705 - 298ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0757 - accuracy: 0.9699 - val_loss: 0.1016 - val_accuracy: 0.9639 - 287ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 3]: 0.9639227390289307 Epoch 1/20 36/36 - 1s - loss: 0.2639 - accuracy: 0.8904 - val_loss: 0.1391 - val_accuracy: 0.9451 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.1404 - accuracy: 0.9454 - val_loss: 0.1027 - val_accuracy: 0.9588 - 284ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1171 - accuracy: 0.9540 - val_loss: 0.1108 - val_accuracy: 0.9604 - 281ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1158 - accuracy: 0.9522 - val_loss: 0.1001 - val_accuracy: 0.9563 - 275ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1027 - accuracy: 0.9580 - val_loss: 0.0992 - val_accuracy: 0.9604 - 291ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.0963 - accuracy: 0.9588 - val_loss: 0.0995 - val_accuracy: 0.9604 - 270ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.0927 - accuracy: 0.9615 - val_loss: 0.0969 - val_accuracy: 0.9593 - 290ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.0923 - accuracy: 0.9626 - val_loss: 0.1167 - val_accuracy: 0.9507 - 292ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.0873 - accuracy: 0.9632 - val_loss: 0.0960 - val_accuracy: 0.9654 - 278ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.0881 - accuracy: 0.9616 - val_loss: 0.1106 - val_accuracy: 0.9497 - 267ms/epoch - 7ms/step Epoch 11/20 36/36 - 0s - loss: 0.0800 - accuracy: 0.9674 - val_loss: 0.0927 - val_accuracy: 0.9644 - 270ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.0814 - accuracy: 0.9676 - val_loss: 0.0953 - val_accuracy: 0.9649 - 297ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0782 - accuracy: 0.9676 - val_loss: 0.0930 - val_accuracy: 0.9665 - 277ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0741 - accuracy: 0.9691 - val_loss: 0.0918 - val_accuracy: 0.9731 - 267ms/epoch - 7ms/step Epoch 15/20 36/36 - 0s - loss: 0.0731 - accuracy: 0.9699 - val_loss: 0.0908 - val_accuracy: 0.9685 - 306ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0691 - accuracy: 0.9731 - val_loss: 0.0903 - val_accuracy: 0.9695 - 312ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.0715 - accuracy: 0.9703 - val_loss: 0.0893 - val_accuracy: 0.9736 - 274ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0690 - accuracy: 0.9727 - val_loss: 0.0936 - val_accuracy: 0.9741 - 278ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0727 - accuracy: 0.9685 - val_loss: 0.0936 - val_accuracy: 0.9639 - 294ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0702 - accuracy: 0.9724 - val_loss: 0.1049 - val_accuracy: 0.9558 - 281ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 5]: 0.9557926654815674 Epoch 1/20 36/36 - 1s - loss: 0.2773 - accuracy: 0.8814 - val_loss: 0.1592 - val_accuracy: 0.9421 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.1434 - accuracy: 0.9464 - val_loss: 0.1312 - val_accuracy: 0.9533 - 468ms/epoch - 13ms/step Epoch 3/20 36/36 - 0s - loss: 0.1182 - accuracy: 0.9539 - val_loss: 0.1312 - val_accuracy: 0.9507 - 429ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.1069 - accuracy: 0.9583 - val_loss: 0.1183 - val_accuracy: 0.9563 - 443ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1037 - accuracy: 0.9580 - val_loss: 0.1168 - val_accuracy: 0.9553 - 425ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.0927 - accuracy: 0.9635 - val_loss: 0.0923 - val_accuracy: 0.9665 - 453ms/epoch - 13ms/step Epoch 7/20 36/36 - 0s - loss: 0.0894 - accuracy: 0.9647 - val_loss: 0.1038 - val_accuracy: 0.9583 - 461ms/epoch - 13ms/step Epoch 8/20 36/36 - 0s - loss: 0.0905 - accuracy: 0.9641 - val_loss: 0.0997 - val_accuracy: 0.9665 - 426ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.0836 - accuracy: 0.9665 - val_loss: 0.1027 - val_accuracy: 0.9614 - 429ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.0841 - accuracy: 0.9649 - val_loss: 0.0948 - val_accuracy: 0.9665 - 451ms/epoch - 13ms/step Epoch 11/20 36/36 - 0s - loss: 0.0773 - accuracy: 0.9690 - val_loss: 0.0922 - val_accuracy: 0.9690 - 436ms/epoch - 12ms/step Epoch 12/20 36/36 - 0s - loss: 0.0754 - accuracy: 0.9702 - val_loss: 0.0875 - val_accuracy: 0.9665 - 271ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0739 - accuracy: 0.9689 - val_loss: 0.0947 - val_accuracy: 0.9670 - 269ms/epoch - 7ms/step Epoch 14/20 36/36 - 0s - loss: 0.0743 - accuracy: 0.9707 - val_loss: 0.0990 - val_accuracy: 0.9649 - 269ms/epoch - 7ms/step Epoch 15/20 36/36 - 0s - loss: 0.0704 - accuracy: 0.9718 - val_loss: 0.1071 - val_accuracy: 0.9527 - 270ms/epoch - 7ms/step Epoch 16/20 36/36 - 0s - loss: 0.0712 - accuracy: 0.9711 - val_loss: 0.0923 - val_accuracy: 0.9695 - 268ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.0730 - accuracy: 0.9699 - val_loss: 0.0982 - val_accuracy: 0.9690 - 308ms/epoch - 9ms/step Epoch 18/20 36/36 - 0s - loss: 0.0652 - accuracy: 0.9739 - val_loss: 0.0872 - val_accuracy: 0.9756 - 270ms/epoch - 7ms/step Epoch 19/20 36/36 - 0s - loss: 0.0642 - accuracy: 0.9738 - val_loss: 0.0927 - val_accuracy: 0.9705 - 272ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0629 - accuracy: 0.9733 - val_loss: 0.0897 - val_accuracy: 0.9741 - 258ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 7]: 0.9740853905677795 Epoch 1/20 36/36 - 1s - loss: 0.2693 - accuracy: 0.8876 - val_loss: 0.1379 - val_accuracy: 0.9507 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.1687 - accuracy: 0.9378 - val_loss: 0.1484 - val_accuracy: 0.9487 - 298ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1453 - accuracy: 0.9440 - val_loss: 0.1227 - val_accuracy: 0.9512 - 280ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1334 - accuracy: 0.9472 - val_loss: 0.1290 - val_accuracy: 0.9482 - 286ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1277 - accuracy: 0.9471 - val_loss: 0.1093 - val_accuracy: 0.9548 - 309ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.1223 - accuracy: 0.9506 - val_loss: 0.1176 - val_accuracy: 0.9533 - 288ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1219 - accuracy: 0.9512 - val_loss: 0.1122 - val_accuracy: 0.9553 - 289ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1243 - accuracy: 0.9478 - val_loss: 0.1151 - val_accuracy: 0.9512 - 301ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1121 - accuracy: 0.9553 - val_loss: 0.1080 - val_accuracy: 0.9593 - 289ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1031 - accuracy: 0.9581 - val_loss: 0.1004 - val_accuracy: 0.9593 - 276ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1052 - accuracy: 0.9569 - val_loss: 0.1033 - val_accuracy: 0.9644 - 287ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1033 - accuracy: 0.9589 - val_loss: 0.1051 - val_accuracy: 0.9593 - 303ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.0996 - accuracy: 0.9597 - val_loss: 0.0941 - val_accuracy: 0.9639 - 299ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.0997 - accuracy: 0.9607 - val_loss: 0.1092 - val_accuracy: 0.9588 - 285ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0994 - accuracy: 0.9591 - val_loss: 0.1095 - val_accuracy: 0.9588 - 281ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0960 - accuracy: 0.9596 - val_loss: 0.0954 - val_accuracy: 0.9644 - 326ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.0991 - accuracy: 0.9591 - val_loss: 0.1032 - val_accuracy: 0.9604 - 292ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0950 - accuracy: 0.9607 - val_loss: 0.1153 - val_accuracy: 0.9573 - 297ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0919 - accuracy: 0.9622 - val_loss: 0.0906 - val_accuracy: 0.9700 - 290ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0948 - accuracy: 0.9629 - val_loss: 0.1006 - val_accuracy: 0.9614 - 284ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 3]: 0.9613820910453796 Epoch 1/20 36/36 - 1s - loss: 0.3179 - accuracy: 0.8578 - val_loss: 0.1520 - val_accuracy: 0.9451 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.1584 - accuracy: 0.9401 - val_loss: 0.1358 - val_accuracy: 0.9507 - 278ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1416 - accuracy: 0.9466 - val_loss: 0.1217 - val_accuracy: 0.9517 - 296ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1307 - accuracy: 0.9473 - val_loss: 0.1123 - val_accuracy: 0.9558 - 283ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1185 - accuracy: 0.9542 - val_loss: 0.1190 - val_accuracy: 0.9492 - 292ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1122 - accuracy: 0.9569 - val_loss: 0.1144 - val_accuracy: 0.9533 - 294ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1113 - accuracy: 0.9546 - val_loss: 0.1136 - val_accuracy: 0.9517 - 327ms/epoch - 9ms/step Epoch 8/20 36/36 - 0s - loss: 0.1172 - accuracy: 0.9535 - val_loss: 0.1123 - val_accuracy: 0.9522 - 287ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1052 - accuracy: 0.9578 - val_loss: 0.1052 - val_accuracy: 0.9609 - 325ms/epoch - 9ms/step Epoch 10/20 36/36 - 0s - loss: 0.1011 - accuracy: 0.9563 - val_loss: 0.1016 - val_accuracy: 0.9634 - 324ms/epoch - 9ms/step Epoch 11/20 36/36 - 0s - loss: 0.1008 - accuracy: 0.9608 - val_loss: 0.1058 - val_accuracy: 0.9543 - 298ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1025 - accuracy: 0.9575 - val_loss: 0.0966 - val_accuracy: 0.9639 - 320ms/epoch - 9ms/step Epoch 13/20 36/36 - 0s - loss: 0.1004 - accuracy: 0.9601 - val_loss: 0.1013 - val_accuracy: 0.9573 - 282ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.1020 - accuracy: 0.9590 - val_loss: 0.0930 - val_accuracy: 0.9690 - 300ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.1027 - accuracy: 0.9612 - val_loss: 0.1298 - val_accuracy: 0.9436 - 292ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0953 - accuracy: 0.9613 - val_loss: 0.1185 - val_accuracy: 0.9502 - 289ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0918 - accuracy: 0.9614 - val_loss: 0.1068 - val_accuracy: 0.9573 - 276ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.0885 - accuracy: 0.9635 - val_loss: 0.0928 - val_accuracy: 0.9634 - 273ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0892 - accuracy: 0.9629 - val_loss: 0.0951 - val_accuracy: 0.9675 - 284ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0890 - accuracy: 0.9633 - val_loss: 0.1032 - val_accuracy: 0.9629 - 345ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 5]: 0.9629064798355103 Epoch 1/20 36/36 - 1s - loss: 0.2891 - accuracy: 0.8732 - val_loss: 0.1344 - val_accuracy: 0.9558 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.1482 - accuracy: 0.9428 - val_loss: 0.1203 - val_accuracy: 0.9558 - 266ms/epoch - 7ms/step Epoch 3/20 36/36 - 0s - loss: 0.1289 - accuracy: 0.9487 - val_loss: 0.1051 - val_accuracy: 0.9578 - 279ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1163 - accuracy: 0.9544 - val_loss: 0.1030 - val_accuracy: 0.9654 - 412ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1051 - accuracy: 0.9586 - val_loss: 0.1256 - val_accuracy: 0.9477 - 435ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.1046 - accuracy: 0.9589 - val_loss: 0.1040 - val_accuracy: 0.9573 - 441ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1068 - accuracy: 0.9579 - val_loss: 0.1203 - val_accuracy: 0.9466 - 437ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.0965 - accuracy: 0.9618 - val_loss: 0.1093 - val_accuracy: 0.9553 - 476ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.0930 - accuracy: 0.9641 - val_loss: 0.1033 - val_accuracy: 0.9588 - 449ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.0883 - accuracy: 0.9635 - val_loss: 0.1179 - val_accuracy: 0.9527 - 488ms/epoch - 14ms/step Epoch 11/20 36/36 - 0s - loss: 0.0866 - accuracy: 0.9638 - val_loss: 0.1056 - val_accuracy: 0.9609 - 493ms/epoch - 14ms/step Epoch 12/20 36/36 - 0s - loss: 0.0881 - accuracy: 0.9641 - val_loss: 0.1070 - val_accuracy: 0.9604 - 483ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.0899 - accuracy: 0.9644 - val_loss: 0.0946 - val_accuracy: 0.9619 - 479ms/epoch - 13ms/step Epoch 14/20 36/36 - 0s - loss: 0.0825 - accuracy: 0.9684 - val_loss: 0.0996 - val_accuracy: 0.9639 - 304ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.0850 - accuracy: 0.9626 - val_loss: 0.1194 - val_accuracy: 0.9568 - 293ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.0826 - accuracy: 0.9651 - val_loss: 0.1029 - val_accuracy: 0.9578 - 278ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.0819 - accuracy: 0.9664 - val_loss: 0.1333 - val_accuracy: 0.9461 - 264ms/epoch - 7ms/step Epoch 18/20 36/36 - 0s - loss: 0.0881 - accuracy: 0.9639 - val_loss: 0.1021 - val_accuracy: 0.9527 - 270ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.0818 - accuracy: 0.9645 - val_loss: 0.1058 - val_accuracy: 0.9517 - 297ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.0819 - accuracy: 0.9666 - val_loss: 0.1089 - val_accuracy: 0.9583 - 289ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 7]: 0.9583333134651184 Epoch 1/20 36/36 - 1s - loss: 0.2336 - accuracy: 0.9016 - val_loss: 0.1320 - val_accuracy: 0.9568 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.1308 - accuracy: 0.9479 - val_loss: 0.1168 - val_accuracy: 0.9497 - 388ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1160 - accuracy: 0.9527 - val_loss: 0.0967 - val_accuracy: 0.9634 - 405ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1040 - accuracy: 0.9555 - val_loss: 0.0963 - val_accuracy: 0.9685 - 416ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.0915 - accuracy: 0.9614 - val_loss: 0.1000 - val_accuracy: 0.9624 - 400ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.0880 - accuracy: 0.9622 - val_loss: 0.0851 - val_accuracy: 0.9715 - 412ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.0846 - accuracy: 0.9646 - val_loss: 0.0906 - val_accuracy: 0.9690 - 375ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.0820 - accuracy: 0.9667 - val_loss: 0.0744 - val_accuracy: 0.9756 - 455ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.0760 - accuracy: 0.9695 - val_loss: 0.0917 - val_accuracy: 0.9609 - 398ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0714 - accuracy: 0.9692 - val_loss: 0.0847 - val_accuracy: 0.9690 - 432ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.0717 - accuracy: 0.9706 - val_loss: 0.0957 - val_accuracy: 0.9624 - 492ms/epoch - 14ms/step Epoch 12/20 36/36 - 1s - loss: 0.0729 - accuracy: 0.9703 - val_loss: 0.0974 - val_accuracy: 0.9634 - 632ms/epoch - 18ms/step Epoch 13/20 36/36 - 1s - loss: 0.0645 - accuracy: 0.9735 - val_loss: 0.0860 - val_accuracy: 0.9695 - 602ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.0619 - accuracy: 0.9747 - val_loss: 0.0951 - val_accuracy: 0.9680 - 619ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0670 - accuracy: 0.9735 - val_loss: 0.0860 - val_accuracy: 0.9715 - 585ms/epoch - 16ms/step Epoch 16/20 36/36 - 1s - loss: 0.0591 - accuracy: 0.9756 - val_loss: 0.0830 - val_accuracy: 0.9695 - 611ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.0610 - accuracy: 0.9754 - val_loss: 0.0853 - val_accuracy: 0.9731 - 618ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0558 - accuracy: 0.9776 - val_loss: 0.0915 - val_accuracy: 0.9715 - 603ms/epoch - 17ms/step Epoch 19/20 36/36 - 0s - loss: 0.0556 - accuracy: 0.9795 - val_loss: 0.0877 - val_accuracy: 0.9710 - 496ms/epoch - 14ms/step Epoch 20/20 36/36 - 0s - loss: 0.0567 - accuracy: 0.9759 - val_loss: 0.0830 - val_accuracy: 0.9736 - 426ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 3]: 0.9735772609710693 Epoch 1/20 36/36 - 1s - loss: 0.2399 - accuracy: 0.8983 - val_loss: 0.1445 - val_accuracy: 0.9507 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.1331 - accuracy: 0.9491 - val_loss: 0.1272 - val_accuracy: 0.9527 - 404ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1109 - accuracy: 0.9562 - val_loss: 0.1028 - val_accuracy: 0.9660 - 413ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.0962 - accuracy: 0.9603 - val_loss: 0.1112 - val_accuracy: 0.9558 - 400ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.0952 - accuracy: 0.9614 - val_loss: 0.1122 - val_accuracy: 0.9517 - 399ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.0826 - accuracy: 0.9655 - val_loss: 0.1112 - val_accuracy: 0.9573 - 423ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.0846 - accuracy: 0.9655 - val_loss: 0.1061 - val_accuracy: 0.9583 - 361ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.0727 - accuracy: 0.9697 - val_loss: 0.0889 - val_accuracy: 0.9644 - 401ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.0760 - accuracy: 0.9688 - val_loss: 0.1095 - val_accuracy: 0.9548 - 418ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.0685 - accuracy: 0.9730 - val_loss: 0.0959 - val_accuracy: 0.9593 - 411ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.0675 - accuracy: 0.9734 - val_loss: 0.0949 - val_accuracy: 0.9675 - 408ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0693 - accuracy: 0.9731 - val_loss: 0.1059 - val_accuracy: 0.9568 - 405ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0654 - accuracy: 0.9719 - val_loss: 0.0926 - val_accuracy: 0.9649 - 377ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0586 - accuracy: 0.9759 - val_loss: 0.0929 - val_accuracy: 0.9700 - 409ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0556 - accuracy: 0.9780 - val_loss: 0.0989 - val_accuracy: 0.9721 - 372ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.0571 - accuracy: 0.9779 - val_loss: 0.0854 - val_accuracy: 0.9731 - 416ms/epoch - 12ms/step Epoch 17/20 36/36 - 0s - loss: 0.0551 - accuracy: 0.9795 - val_loss: 0.1098 - val_accuracy: 0.9604 - 370ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.0599 - accuracy: 0.9764 - val_loss: 0.0898 - val_accuracy: 0.9721 - 368ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.0552 - accuracy: 0.9774 - val_loss: 0.0812 - val_accuracy: 0.9776 - 366ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0531 - accuracy: 0.9785 - val_loss: 0.1004 - val_accuracy: 0.9644 - 397ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 5]: 0.9644308686256409 Epoch 1/20 36/36 - 2s - loss: 0.2289 - accuracy: 0.9029 - val_loss: 0.1176 - val_accuracy: 0.9563 - 2s/epoch - 44ms/step Epoch 2/20 36/36 - 0s - loss: 0.1315 - accuracy: 0.9476 - val_loss: 0.1190 - val_accuracy: 0.9548 - 353ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.1086 - accuracy: 0.9554 - val_loss: 0.1369 - val_accuracy: 0.9482 - 389ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.0913 - accuracy: 0.9632 - val_loss: 0.0902 - val_accuracy: 0.9665 - 416ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.0871 - accuracy: 0.9647 - val_loss: 0.1104 - val_accuracy: 0.9604 - 351ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.0788 - accuracy: 0.9661 - val_loss: 0.0975 - val_accuracy: 0.9614 - 377ms/epoch - 10ms/step Epoch 7/20 36/36 - 0s - loss: 0.0744 - accuracy: 0.9694 - val_loss: 0.0974 - val_accuracy: 0.9639 - 402ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.0779 - accuracy: 0.9693 - val_loss: 0.0836 - val_accuracy: 0.9721 - 338ms/epoch - 9ms/step Epoch 9/20 36/36 - 0s - loss: 0.0728 - accuracy: 0.9710 - val_loss: 0.0874 - val_accuracy: 0.9695 - 411ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0651 - accuracy: 0.9734 - val_loss: 0.0805 - val_accuracy: 0.9766 - 390ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.0638 - accuracy: 0.9750 - val_loss: 0.0944 - val_accuracy: 0.9715 - 405ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0625 - accuracy: 0.9743 - val_loss: 0.0873 - val_accuracy: 0.9746 - 400ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0621 - accuracy: 0.9759 - val_loss: 0.0848 - val_accuracy: 0.9731 - 345ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0551 - accuracy: 0.9781 - val_loss: 0.1147 - val_accuracy: 0.9563 - 398ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0621 - accuracy: 0.9759 - val_loss: 0.0878 - val_accuracy: 0.9776 - 351ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.0607 - accuracy: 0.9763 - val_loss: 0.0899 - val_accuracy: 0.9700 - 381ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0537 - accuracy: 0.9796 - val_loss: 0.0850 - val_accuracy: 0.9736 - 408ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.0489 - accuracy: 0.9823 - val_loss: 0.0908 - val_accuracy: 0.9695 - 356ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.0464 - accuracy: 0.9828 - val_loss: 0.1069 - val_accuracy: 0.9695 - 403ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0480 - accuracy: 0.9815 - val_loss: 0.0972 - val_accuracy: 0.9639 - 406ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 7]: 0.9639227390289307 Epoch 1/20 36/36 - 2s - loss: 0.2386 - accuracy: 0.9030 - val_loss: 0.1614 - val_accuracy: 0.9436 - 2s/epoch - 56ms/step Epoch 2/20 36/36 - 1s - loss: 0.1429 - accuracy: 0.9473 - val_loss: 0.1444 - val_accuracy: 0.9421 - 627ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1279 - accuracy: 0.9512 - val_loss: 0.1298 - val_accuracy: 0.9472 - 647ms/epoch - 18ms/step Epoch 4/20 36/36 - 1s - loss: 0.1208 - accuracy: 0.9522 - val_loss: 0.1332 - val_accuracy: 0.9365 - 521ms/epoch - 14ms/step Epoch 5/20 36/36 - 0s - loss: 0.1043 - accuracy: 0.9574 - val_loss: 0.1034 - val_accuracy: 0.9588 - 378ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.0982 - accuracy: 0.9613 - val_loss: 0.0956 - val_accuracy: 0.9609 - 389ms/epoch - 11ms/step Epoch 7/20 36/36 - 0s - loss: 0.0951 - accuracy: 0.9624 - val_loss: 0.0951 - val_accuracy: 0.9675 - 412ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.0854 - accuracy: 0.9640 - val_loss: 0.0864 - val_accuracy: 0.9721 - 426ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.0885 - accuracy: 0.9654 - val_loss: 0.0845 - val_accuracy: 0.9715 - 406ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0851 - accuracy: 0.9648 - val_loss: 0.0932 - val_accuracy: 0.9665 - 408ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.0777 - accuracy: 0.9678 - val_loss: 0.0898 - val_accuracy: 0.9654 - 397ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0790 - accuracy: 0.9680 - val_loss: 0.0924 - val_accuracy: 0.9721 - 373ms/epoch - 10ms/step Epoch 13/20 36/36 - 0s - loss: 0.0817 - accuracy: 0.9672 - val_loss: 0.0994 - val_accuracy: 0.9578 - 401ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.0743 - accuracy: 0.9698 - val_loss: 0.0809 - val_accuracy: 0.9705 - 380ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.0817 - accuracy: 0.9657 - val_loss: 0.0870 - val_accuracy: 0.9726 - 387ms/epoch - 11ms/step Epoch 16/20 36/36 - 0s - loss: 0.0785 - accuracy: 0.9688 - val_loss: 0.0900 - val_accuracy: 0.9690 - 394ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0729 - accuracy: 0.9704 - val_loss: 0.0879 - val_accuracy: 0.9685 - 371ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.0770 - accuracy: 0.9673 - val_loss: 0.0925 - val_accuracy: 0.9685 - 421ms/epoch - 12ms/step Epoch 19/20 36/36 - 0s - loss: 0.0738 - accuracy: 0.9690 - val_loss: 0.1002 - val_accuracy: 0.9649 - 372ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0714 - accuracy: 0.9720 - val_loss: 0.1065 - val_accuracy: 0.9578 - 376ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 3]: 0.9578251838684082 Epoch 1/20 36/36 - 1s - loss: 0.2399 - accuracy: 0.9021 - val_loss: 0.1286 - val_accuracy: 0.9548 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.1401 - accuracy: 0.9453 - val_loss: 0.1066 - val_accuracy: 0.9619 - 414ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1164 - accuracy: 0.9539 - val_loss: 0.1253 - val_accuracy: 0.9538 - 374ms/epoch - 10ms/step Epoch 4/20 36/36 - 1s - loss: 0.1095 - accuracy: 0.9563 - val_loss: 0.1182 - val_accuracy: 0.9538 - 552ms/epoch - 15ms/step Epoch 5/20 36/36 - 1s - loss: 0.0968 - accuracy: 0.9621 - val_loss: 0.1022 - val_accuracy: 0.9588 - 616ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.0879 - accuracy: 0.9637 - val_loss: 0.1157 - val_accuracy: 0.9553 - 573ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.0877 - accuracy: 0.9648 - val_loss: 0.0894 - val_accuracy: 0.9710 - 626ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.0857 - accuracy: 0.9658 - val_loss: 0.1186 - val_accuracy: 0.9482 - 608ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.0818 - accuracy: 0.9656 - val_loss: 0.0913 - val_accuracy: 0.9705 - 602ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.0783 - accuracy: 0.9682 - val_loss: 0.0877 - val_accuracy: 0.9685 - 639ms/epoch - 18ms/step Epoch 11/20 36/36 - 1s - loss: 0.0781 - accuracy: 0.9691 - val_loss: 0.1196 - val_accuracy: 0.9527 - 558ms/epoch - 15ms/step Epoch 12/20 36/36 - 0s - loss: 0.0781 - accuracy: 0.9690 - val_loss: 0.1123 - val_accuracy: 0.9588 - 404ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.0810 - accuracy: 0.9679 - val_loss: 0.0897 - val_accuracy: 0.9705 - 368ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.0720 - accuracy: 0.9689 - val_loss: 0.1019 - val_accuracy: 0.9614 - 370ms/epoch - 10ms/step Epoch 15/20 36/36 - 0s - loss: 0.0752 - accuracy: 0.9699 - val_loss: 0.0954 - val_accuracy: 0.9670 - 378ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.0763 - accuracy: 0.9693 - val_loss: 0.0907 - val_accuracy: 0.9665 - 404ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0753 - accuracy: 0.9708 - val_loss: 0.0951 - val_accuracy: 0.9629 - 363ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.0697 - accuracy: 0.9724 - val_loss: 0.0955 - val_accuracy: 0.9675 - 397ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0707 - accuracy: 0.9729 - val_loss: 0.0956 - val_accuracy: 0.9741 - 364ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.0713 - accuracy: 0.9723 - val_loss: 0.0901 - val_accuracy: 0.9680 - 401ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 5]: 0.9679877758026123 Epoch 1/20 36/36 - 1s - loss: 0.2710 - accuracy: 0.8802 - val_loss: 0.1933 - val_accuracy: 0.9278 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.1453 - accuracy: 0.9436 - val_loss: 0.1048 - val_accuracy: 0.9593 - 402ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1149 - accuracy: 0.9561 - val_loss: 0.0999 - val_accuracy: 0.9654 - 337ms/epoch - 9ms/step Epoch 4/20 36/36 - 0s - loss: 0.1091 - accuracy: 0.9563 - val_loss: 0.1052 - val_accuracy: 0.9609 - 362ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.0990 - accuracy: 0.9605 - val_loss: 0.1407 - val_accuracy: 0.9482 - 342ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.0876 - accuracy: 0.9645 - val_loss: 0.0935 - val_accuracy: 0.9639 - 338ms/epoch - 9ms/step Epoch 7/20 36/36 - 0s - loss: 0.0861 - accuracy: 0.9641 - val_loss: 0.0926 - val_accuracy: 0.9690 - 358ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.0790 - accuracy: 0.9677 - val_loss: 0.1067 - val_accuracy: 0.9593 - 389ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.0837 - accuracy: 0.9663 - val_loss: 0.1262 - val_accuracy: 0.9441 - 397ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.0793 - accuracy: 0.9687 - val_loss: 0.0873 - val_accuracy: 0.9710 - 395ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.0744 - accuracy: 0.9700 - val_loss: 0.0881 - val_accuracy: 0.9690 - 386ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.0708 - accuracy: 0.9716 - val_loss: 0.1126 - val_accuracy: 0.9502 - 369ms/epoch - 10ms/step Epoch 13/20 36/36 - 0s - loss: 0.0697 - accuracy: 0.9711 - val_loss: 0.1000 - val_accuracy: 0.9649 - 451ms/epoch - 13ms/step Epoch 14/20 36/36 - 1s - loss: 0.0677 - accuracy: 0.9729 - val_loss: 0.0994 - val_accuracy: 0.9660 - 582ms/epoch - 16ms/step Epoch 15/20 36/36 - 1s - loss: 0.0646 - accuracy: 0.9726 - val_loss: 0.1017 - val_accuracy: 0.9670 - 590ms/epoch - 16ms/step Epoch 16/20 36/36 - 1s - loss: 0.0668 - accuracy: 0.9730 - val_loss: 0.0978 - val_accuracy: 0.9690 - 585ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0647 - accuracy: 0.9737 - val_loss: 0.0932 - val_accuracy: 0.9710 - 551ms/epoch - 15ms/step Epoch 18/20 36/36 - 1s - loss: 0.0625 - accuracy: 0.9746 - val_loss: 0.0874 - val_accuracy: 0.9746 - 589ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0646 - accuracy: 0.9738 - val_loss: 0.0943 - val_accuracy: 0.9736 - 545ms/epoch - 15ms/step Epoch 20/20 36/36 - 1s - loss: 0.0647 - accuracy: 0.9758 - val_loss: 0.0890 - val_accuracy: 0.9726 - 567ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 7]: 0.9725610017776489 Epoch 1/20 36/36 - 2s - loss: 0.2612 - accuracy: 0.8930 - val_loss: 0.1490 - val_accuracy: 0.9517 - 2s/epoch - 68ms/step Epoch 2/20 36/36 - 0s - loss: 0.1666 - accuracy: 0.9354 - val_loss: 0.1922 - val_accuracy: 0.9177 - 394ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.1498 - accuracy: 0.9432 - val_loss: 0.1357 - val_accuracy: 0.9441 - 374ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1304 - accuracy: 0.9484 - val_loss: 0.1155 - val_accuracy: 0.9558 - 448ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1297 - accuracy: 0.9471 - val_loss: 0.1521 - val_accuracy: 0.9421 - 420ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.1296 - accuracy: 0.9503 - val_loss: 0.1245 - val_accuracy: 0.9472 - 439ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1213 - accuracy: 0.9512 - val_loss: 0.1171 - val_accuracy: 0.9522 - 411ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1206 - accuracy: 0.9503 - val_loss: 0.1119 - val_accuracy: 0.9553 - 394ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.1094 - accuracy: 0.9560 - val_loss: 0.1349 - val_accuracy: 0.9405 - 436ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.1120 - accuracy: 0.9552 - val_loss: 0.1098 - val_accuracy: 0.9568 - 384ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.1046 - accuracy: 0.9566 - val_loss: 0.0990 - val_accuracy: 0.9634 - 446ms/epoch - 12ms/step Epoch 12/20 36/36 - 0s - loss: 0.1027 - accuracy: 0.9559 - val_loss: 0.0936 - val_accuracy: 0.9680 - 419ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.1052 - accuracy: 0.9551 - val_loss: 0.1104 - val_accuracy: 0.9558 - 398ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.1058 - accuracy: 0.9546 - val_loss: 0.1096 - val_accuracy: 0.9629 - 420ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.1051 - accuracy: 0.9547 - val_loss: 0.1080 - val_accuracy: 0.9543 - 429ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.0976 - accuracy: 0.9595 - val_loss: 0.1027 - val_accuracy: 0.9573 - 438ms/epoch - 12ms/step Epoch 17/20 36/36 - 0s - loss: 0.0993 - accuracy: 0.9594 - val_loss: 0.1005 - val_accuracy: 0.9665 - 451ms/epoch - 13ms/step Epoch 18/20 36/36 - 1s - loss: 0.0938 - accuracy: 0.9633 - val_loss: 0.0962 - val_accuracy: 0.9710 - 625ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0963 - accuracy: 0.9622 - val_loss: 0.0980 - val_accuracy: 0.9670 - 608ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0968 - accuracy: 0.9589 - val_loss: 0.1029 - val_accuracy: 0.9649 - 619ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 3]: 0.9649389982223511 Epoch 1/20 36/36 - 2s - loss: 0.2853 - accuracy: 0.8843 - val_loss: 0.1400 - val_accuracy: 0.9502 - 2s/epoch - 58ms/step Epoch 2/20 36/36 - 1s - loss: 0.1570 - accuracy: 0.9401 - val_loss: 0.1425 - val_accuracy: 0.9492 - 500ms/epoch - 14ms/step Epoch 3/20 36/36 - 0s - loss: 0.1364 - accuracy: 0.9486 - val_loss: 0.1073 - val_accuracy: 0.9593 - 380ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1230 - accuracy: 0.9514 - val_loss: 0.1119 - val_accuracy: 0.9527 - 403ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1181 - accuracy: 0.9546 - val_loss: 0.1254 - val_accuracy: 0.9533 - 414ms/epoch - 11ms/step Epoch 6/20 36/36 - 0s - loss: 0.1179 - accuracy: 0.9525 - val_loss: 0.1272 - val_accuracy: 0.9497 - 422ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1156 - accuracy: 0.9546 - val_loss: 0.1013 - val_accuracy: 0.9619 - 394ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1065 - accuracy: 0.9564 - val_loss: 0.1108 - val_accuracy: 0.9522 - 393ms/epoch - 11ms/step Epoch 9/20 36/36 - 1s - loss: 0.1058 - accuracy: 0.9577 - val_loss: 0.0965 - val_accuracy: 0.9665 - 506ms/epoch - 14ms/step Epoch 10/20 36/36 - 1s - loss: 0.1063 - accuracy: 0.9581 - val_loss: 0.0994 - val_accuracy: 0.9593 - 634ms/epoch - 18ms/step Epoch 11/20 36/36 - 1s - loss: 0.0986 - accuracy: 0.9602 - val_loss: 0.1102 - val_accuracy: 0.9543 - 599ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.0967 - accuracy: 0.9619 - val_loss: 0.1079 - val_accuracy: 0.9634 - 592ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.0987 - accuracy: 0.9602 - val_loss: 0.1104 - val_accuracy: 0.9548 - 591ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0950 - accuracy: 0.9636 - val_loss: 0.1222 - val_accuracy: 0.9466 - 585ms/epoch - 16ms/step Epoch 15/20 36/36 - 1s - loss: 0.0946 - accuracy: 0.9620 - val_loss: 0.1014 - val_accuracy: 0.9670 - 616ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0894 - accuracy: 0.9635 - val_loss: 0.1115 - val_accuracy: 0.9583 - 629ms/epoch - 17ms/step Epoch 17/20 36/36 - 0s - loss: 0.0923 - accuracy: 0.9621 - val_loss: 0.1288 - val_accuracy: 0.9517 - 454ms/epoch - 13ms/step Epoch 18/20 36/36 - 0s - loss: 0.0928 - accuracy: 0.9621 - val_loss: 0.1066 - val_accuracy: 0.9624 - 390ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0931 - accuracy: 0.9619 - val_loss: 0.0994 - val_accuracy: 0.9675 - 427ms/epoch - 12ms/step Epoch 20/20 36/36 - 0s - loss: 0.0926 - accuracy: 0.9626 - val_loss: 0.1030 - val_accuracy: 0.9558 - 407ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 5]: 0.9557926654815674 Epoch 1/20 36/36 - 1s - loss: 0.2827 - accuracy: 0.8786 - val_loss: 0.1399 - val_accuracy: 0.9461 - 1s/epoch - 38ms/step Epoch 2/20 36/36 - 0s - loss: 0.1556 - accuracy: 0.9414 - val_loss: 0.1252 - val_accuracy: 0.9548 - 365ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.1317 - accuracy: 0.9508 - val_loss: 0.1115 - val_accuracy: 0.9533 - 407ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1176 - accuracy: 0.9534 - val_loss: 0.1341 - val_accuracy: 0.9461 - 371ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.1101 - accuracy: 0.9559 - val_loss: 0.0965 - val_accuracy: 0.9654 - 348ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1120 - accuracy: 0.9549 - val_loss: 0.1095 - val_accuracy: 0.9533 - 351ms/epoch - 10ms/step Epoch 7/20 36/36 - 1s - loss: 0.1113 - accuracy: 0.9551 - val_loss: 0.1225 - val_accuracy: 0.9502 - 550ms/epoch - 15ms/step Epoch 8/20 36/36 - 1s - loss: 0.0977 - accuracy: 0.9574 - val_loss: 0.1139 - val_accuracy: 0.9573 - 569ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.0951 - accuracy: 0.9622 - val_loss: 0.1134 - val_accuracy: 0.9558 - 562ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.0934 - accuracy: 0.9610 - val_loss: 0.1025 - val_accuracy: 0.9639 - 576ms/epoch - 16ms/step Epoch 11/20 36/36 - 1s - loss: 0.0947 - accuracy: 0.9606 - val_loss: 0.0921 - val_accuracy: 0.9665 - 556ms/epoch - 15ms/step Epoch 12/20 36/36 - 1s - loss: 0.0902 - accuracy: 0.9623 - val_loss: 0.0964 - val_accuracy: 0.9609 - 594ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.0972 - accuracy: 0.9615 - val_loss: 0.1172 - val_accuracy: 0.9527 - 582ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0926 - accuracy: 0.9610 - val_loss: 0.0973 - val_accuracy: 0.9654 - 590ms/epoch - 16ms/step Epoch 15/20 36/36 - 0s - loss: 0.0879 - accuracy: 0.9633 - val_loss: 0.1032 - val_accuracy: 0.9634 - 454ms/epoch - 13ms/step Epoch 16/20 36/36 - 0s - loss: 0.0872 - accuracy: 0.9621 - val_loss: 0.1316 - val_accuracy: 0.9472 - 399ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.0864 - accuracy: 0.9634 - val_loss: 0.0991 - val_accuracy: 0.9649 - 379ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.0865 - accuracy: 0.9629 - val_loss: 0.1035 - val_accuracy: 0.9665 - 358ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.0785 - accuracy: 0.9673 - val_loss: 0.1032 - val_accuracy: 0.9604 - 405ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0766 - accuracy: 0.9689 - val_loss: 0.0937 - val_accuracy: 0.9726 - 364ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 7]: 0.9725610017776489 Epoch 1/20 36/36 - 2s - loss: 0.2372 - accuracy: 0.9027 - val_loss: 0.1287 - val_accuracy: 0.9538 - 2s/epoch - 43ms/step Epoch 2/20 36/36 - 1s - loss: 0.1331 - accuracy: 0.9494 - val_loss: 0.1159 - val_accuracy: 0.9538 - 604ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1103 - accuracy: 0.9551 - val_loss: 0.1057 - val_accuracy: 0.9573 - 607ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1055 - accuracy: 0.9566 - val_loss: 0.0922 - val_accuracy: 0.9639 - 599ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1023 - accuracy: 0.9599 - val_loss: 0.1173 - val_accuracy: 0.9456 - 602ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.0895 - accuracy: 0.9637 - val_loss: 0.0956 - val_accuracy: 0.9670 - 601ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.0858 - accuracy: 0.9657 - val_loss: 0.1122 - val_accuracy: 0.9477 - 607ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.0780 - accuracy: 0.9683 - val_loss: 0.0952 - val_accuracy: 0.9634 - 610ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.0759 - accuracy: 0.9697 - val_loss: 0.0979 - val_accuracy: 0.9654 - 629ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.0745 - accuracy: 0.9704 - val_loss: 0.0784 - val_accuracy: 0.9710 - 598ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.0757 - accuracy: 0.9689 - val_loss: 0.0792 - val_accuracy: 0.9721 - 629ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.0689 - accuracy: 0.9735 - val_loss: 0.0862 - val_accuracy: 0.9715 - 976ms/epoch - 27ms/step Epoch 13/20 36/36 - 1s - loss: 0.0704 - accuracy: 0.9731 - val_loss: 0.0886 - val_accuracy: 0.9690 - 986ms/epoch - 27ms/step Epoch 14/20 36/36 - 1s - loss: 0.0662 - accuracy: 0.9741 - val_loss: 0.0896 - val_accuracy: 0.9726 - 974ms/epoch - 27ms/step Epoch 15/20 36/36 - 1s - loss: 0.0606 - accuracy: 0.9758 - val_loss: 0.0823 - val_accuracy: 0.9715 - 1s/epoch - 29ms/step Epoch 16/20 36/36 - 1s - loss: 0.0609 - accuracy: 0.9759 - val_loss: 0.0819 - val_accuracy: 0.9756 - 894ms/epoch - 25ms/step Epoch 17/20 36/36 - 1s - loss: 0.0630 - accuracy: 0.9754 - val_loss: 0.0778 - val_accuracy: 0.9766 - 614ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0590 - accuracy: 0.9766 - val_loss: 0.1119 - val_accuracy: 0.9644 - 610ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0579 - accuracy: 0.9770 - val_loss: 0.0893 - val_accuracy: 0.9695 - 585ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0666 - accuracy: 0.9765 - val_loss: 0.1063 - val_accuracy: 0.9670 - 603ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 3]: 0.9669715166091919 Epoch 1/20 36/36 - 2s - loss: 0.2426 - accuracy: 0.9019 - val_loss: 0.1421 - val_accuracy: 0.9482 - 2s/epoch - 52ms/step Epoch 2/20 36/36 - 1s - loss: 0.1333 - accuracy: 0.9467 - val_loss: 0.1080 - val_accuracy: 0.9593 - 932ms/epoch - 26ms/step Epoch 3/20 36/36 - 1s - loss: 0.1075 - accuracy: 0.9557 - val_loss: 0.0977 - val_accuracy: 0.9593 - 925ms/epoch - 26ms/step Epoch 4/20 36/36 - 1s - loss: 0.1004 - accuracy: 0.9580 - val_loss: 0.0959 - val_accuracy: 0.9680 - 949ms/epoch - 26ms/step Epoch 5/20 36/36 - 1s - loss: 0.0939 - accuracy: 0.9610 - val_loss: 0.0873 - val_accuracy: 0.9680 - 945ms/epoch - 26ms/step Epoch 6/20 36/36 - 1s - loss: 0.0871 - accuracy: 0.9660 - val_loss: 0.1071 - val_accuracy: 0.9614 - 651ms/epoch - 18ms/step Epoch 7/20 36/36 - 1s - loss: 0.0839 - accuracy: 0.9669 - val_loss: 0.0936 - val_accuracy: 0.9619 - 610ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.0835 - accuracy: 0.9679 - val_loss: 0.0868 - val_accuracy: 0.9700 - 563ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.0782 - accuracy: 0.9692 - val_loss: 0.0895 - val_accuracy: 0.9685 - 572ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.0665 - accuracy: 0.9748 - val_loss: 0.0923 - val_accuracy: 0.9695 - 558ms/epoch - 16ms/step Epoch 11/20 36/36 - 1s - loss: 0.0701 - accuracy: 0.9708 - val_loss: 0.0953 - val_accuracy: 0.9614 - 601ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.0635 - accuracy: 0.9757 - val_loss: 0.0938 - val_accuracy: 0.9726 - 571ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.0746 - accuracy: 0.9712 - val_loss: 0.1082 - val_accuracy: 0.9553 - 565ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0711 - accuracy: 0.9716 - val_loss: 0.0996 - val_accuracy: 0.9614 - 607ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0596 - accuracy: 0.9756 - val_loss: 0.0882 - val_accuracy: 0.9736 - 604ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0593 - accuracy: 0.9755 - val_loss: 0.0975 - val_accuracy: 0.9588 - 578ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0553 - accuracy: 0.9778 - val_loss: 0.0889 - val_accuracy: 0.9695 - 596ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0573 - accuracy: 0.9748 - val_loss: 0.1097 - val_accuracy: 0.9634 - 584ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0539 - accuracy: 0.9788 - val_loss: 0.0893 - val_accuracy: 0.9690 - 620ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0495 - accuracy: 0.9803 - val_loss: 0.0943 - val_accuracy: 0.9746 - 612ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 5]: 0.9745935201644897 Epoch 1/20 36/36 - 2s - loss: 0.2557 - accuracy: 0.8880 - val_loss: 0.1619 - val_accuracy: 0.9416 - 2s/epoch - 48ms/step Epoch 2/20 36/36 - 1s - loss: 0.1217 - accuracy: 0.9522 - val_loss: 0.1245 - val_accuracy: 0.9548 - 864ms/epoch - 24ms/step Epoch 3/20 36/36 - 1s - loss: 0.1014 - accuracy: 0.9590 - val_loss: 0.1045 - val_accuracy: 0.9578 - 874ms/epoch - 24ms/step Epoch 4/20 36/36 - 1s - loss: 0.0915 - accuracy: 0.9652 - val_loss: 0.0874 - val_accuracy: 0.9690 - 840ms/epoch - 23ms/step Epoch 5/20 36/36 - 1s - loss: 0.0881 - accuracy: 0.9634 - val_loss: 0.1068 - val_accuracy: 0.9604 - 853ms/epoch - 24ms/step Epoch 6/20 36/36 - 1s - loss: 0.0810 - accuracy: 0.9671 - val_loss: 0.0850 - val_accuracy: 0.9690 - 774ms/epoch - 21ms/step Epoch 7/20 36/36 - 1s - loss: 0.0768 - accuracy: 0.9676 - val_loss: 0.1045 - val_accuracy: 0.9660 - 541ms/epoch - 15ms/step Epoch 8/20 36/36 - 1s - loss: 0.0706 - accuracy: 0.9718 - val_loss: 0.0842 - val_accuracy: 0.9660 - 550ms/epoch - 15ms/step Epoch 9/20 36/36 - 1s - loss: 0.0740 - accuracy: 0.9705 - val_loss: 0.1141 - val_accuracy: 0.9568 - 551ms/epoch - 15ms/step Epoch 10/20 36/36 - 1s - loss: 0.0688 - accuracy: 0.9733 - val_loss: 0.0906 - val_accuracy: 0.9665 - 524ms/epoch - 15ms/step Epoch 11/20 36/36 - 1s - loss: 0.0611 - accuracy: 0.9755 - val_loss: 0.0956 - val_accuracy: 0.9629 - 571ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.0614 - accuracy: 0.9777 - val_loss: 0.0969 - val_accuracy: 0.9665 - 548ms/epoch - 15ms/step Epoch 13/20 36/36 - 1s - loss: 0.0590 - accuracy: 0.9763 - val_loss: 0.0905 - val_accuracy: 0.9690 - 527ms/epoch - 15ms/step Epoch 14/20 36/36 - 1s - loss: 0.0627 - accuracy: 0.9746 - val_loss: 0.0921 - val_accuracy: 0.9685 - 547ms/epoch - 15ms/step Epoch 15/20 36/36 - 1s - loss: 0.0632 - accuracy: 0.9767 - val_loss: 0.0905 - val_accuracy: 0.9700 - 534ms/epoch - 15ms/step Epoch 16/20 36/36 - 1s - loss: 0.0562 - accuracy: 0.9781 - val_loss: 0.1018 - val_accuracy: 0.9644 - 521ms/epoch - 14ms/step Epoch 17/20 36/36 - 1s - loss: 0.0529 - accuracy: 0.9793 - val_loss: 0.1020 - val_accuracy: 0.9695 - 571ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0529 - accuracy: 0.9799 - val_loss: 0.1021 - val_accuracy: 0.9649 - 545ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.0496 - accuracy: 0.9802 - val_loss: 0.0823 - val_accuracy: 0.9761 - 560ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0492 - accuracy: 0.9809 - val_loss: 0.0883 - val_accuracy: 0.9766 - 542ms/epoch - 15ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 7]: 0.9766260385513306 Epoch 1/20 36/36 - 2s - loss: 0.2459 - accuracy: 0.9003 - val_loss: 0.1484 - val_accuracy: 0.9477 - 2s/epoch - 44ms/step Epoch 2/20 36/36 - 1s - loss: 0.1410 - accuracy: 0.9457 - val_loss: 0.1164 - val_accuracy: 0.9538 - 974ms/epoch - 27ms/step Epoch 3/20 36/36 - 1s - loss: 0.1193 - accuracy: 0.9523 - val_loss: 0.1351 - val_accuracy: 0.9573 - 1s/epoch - 31ms/step Epoch 4/20 36/36 - 1s - loss: 0.1135 - accuracy: 0.9542 - val_loss: 0.1141 - val_accuracy: 0.9578 - 977ms/epoch - 27ms/step Epoch 5/20 36/36 - 1s - loss: 0.1060 - accuracy: 0.9572 - val_loss: 0.0983 - val_accuracy: 0.9588 - 979ms/epoch - 27ms/step Epoch 6/20 36/36 - 1s - loss: 0.0965 - accuracy: 0.9595 - val_loss: 0.1379 - val_accuracy: 0.9456 - 978ms/epoch - 27ms/step Epoch 7/20 36/36 - 1s - loss: 0.0952 - accuracy: 0.9620 - val_loss: 0.0917 - val_accuracy: 0.9629 - 661ms/epoch - 18ms/step Epoch 8/20 36/36 - 1s - loss: 0.0906 - accuracy: 0.9641 - val_loss: 0.0869 - val_accuracy: 0.9685 - 583ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.0834 - accuracy: 0.9658 - val_loss: 0.0929 - val_accuracy: 0.9624 - 609ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.0844 - accuracy: 0.9647 - val_loss: 0.1262 - val_accuracy: 0.9497 - 616ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.0892 - accuracy: 0.9652 - val_loss: 0.1056 - val_accuracy: 0.9563 - 596ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.0851 - accuracy: 0.9655 - val_loss: 0.0976 - val_accuracy: 0.9614 - 619ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.0820 - accuracy: 0.9675 - val_loss: 0.0967 - val_accuracy: 0.9649 - 590ms/epoch - 16ms/step Epoch 14/20 36/36 - 1s - loss: 0.0830 - accuracy: 0.9680 - val_loss: 0.0895 - val_accuracy: 0.9665 - 629ms/epoch - 17ms/step Epoch 15/20 36/36 - 1s - loss: 0.0770 - accuracy: 0.9694 - val_loss: 0.1009 - val_accuracy: 0.9665 - 673ms/epoch - 19ms/step Epoch 16/20 36/36 - 1s - loss: 0.0767 - accuracy: 0.9692 - val_loss: 0.0828 - val_accuracy: 0.9690 - 713ms/epoch - 20ms/step Epoch 17/20 36/36 - 1s - loss: 0.0752 - accuracy: 0.9699 - val_loss: 0.0955 - val_accuracy: 0.9644 - 596ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.0731 - accuracy: 0.9707 - val_loss: 0.0890 - val_accuracy: 0.9736 - 596ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0772 - accuracy: 0.9690 - val_loss: 0.0904 - val_accuracy: 0.9731 - 592ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0694 - accuracy: 0.9714 - val_loss: 0.0882 - val_accuracy: 0.9700 - 581ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 3]: 0.9700203537940979 Epoch 1/20 36/36 - 2s - loss: 0.2775 - accuracy: 0.8818 - val_loss: 0.1190 - val_accuracy: 0.9563 - 2s/epoch - 45ms/step Epoch 2/20 36/36 - 1s - loss: 0.1413 - accuracy: 0.9446 - val_loss: 0.1053 - val_accuracy: 0.9604 - 585ms/epoch - 16ms/step Epoch 3/20 36/36 - 1s - loss: 0.1197 - accuracy: 0.9523 - val_loss: 0.1316 - val_accuracy: 0.9472 - 568ms/epoch - 16ms/step Epoch 4/20 36/36 - 1s - loss: 0.1084 - accuracy: 0.9550 - val_loss: 0.1168 - val_accuracy: 0.9497 - 603ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.0985 - accuracy: 0.9581 - val_loss: 0.1392 - val_accuracy: 0.9466 - 593ms/epoch - 16ms/step Epoch 6/20 36/36 - 1s - loss: 0.0979 - accuracy: 0.9594 - val_loss: 0.0983 - val_accuracy: 0.9629 - 590ms/epoch - 16ms/step Epoch 7/20 36/36 - 1s - loss: 0.0899 - accuracy: 0.9635 - val_loss: 0.0882 - val_accuracy: 0.9680 - 561ms/epoch - 16ms/step Epoch 8/20 36/36 - 1s - loss: 0.0877 - accuracy: 0.9642 - val_loss: 0.1048 - val_accuracy: 0.9609 - 576ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.0866 - accuracy: 0.9653 - val_loss: 0.1104 - val_accuracy: 0.9604 - 582ms/epoch - 16ms/step Epoch 10/20 36/36 - 1s - loss: 0.0839 - accuracy: 0.9671 - val_loss: 0.0990 - val_accuracy: 0.9578 - 595ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.0823 - accuracy: 0.9673 - val_loss: 0.0848 - val_accuracy: 0.9721 - 582ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.0765 - accuracy: 0.9694 - val_loss: 0.1105 - val_accuracy: 0.9558 - 782ms/epoch - 22ms/step Epoch 13/20 36/36 - 1s - loss: 0.0788 - accuracy: 0.9674 - val_loss: 0.1025 - val_accuracy: 0.9654 - 1s/epoch - 32ms/step Epoch 14/20 36/36 - 1s - loss: 0.0819 - accuracy: 0.9673 - val_loss: 0.0912 - val_accuracy: 0.9634 - 993ms/epoch - 28ms/step Epoch 15/20 36/36 - 1s - loss: 0.0745 - accuracy: 0.9700 - val_loss: 0.1043 - val_accuracy: 0.9660 - 925ms/epoch - 26ms/step Epoch 16/20 36/36 - 1s - loss: 0.0780 - accuracy: 0.9691 - val_loss: 0.0898 - val_accuracy: 0.9741 - 926ms/epoch - 26ms/step Epoch 17/20 36/36 - 1s - loss: 0.0751 - accuracy: 0.9660 - val_loss: 0.0935 - val_accuracy: 0.9619 - 756ms/epoch - 21ms/step Epoch 18/20 36/36 - 1s - loss: 0.0700 - accuracy: 0.9718 - val_loss: 0.1029 - val_accuracy: 0.9644 - 609ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.0707 - accuracy: 0.9714 - val_loss: 0.0896 - val_accuracy: 0.9705 - 572ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0692 - accuracy: 0.9730 - val_loss: 0.0920 - val_accuracy: 0.9736 - 574ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 5]: 0.9735772609710693 Epoch 1/20 36/36 - 2s - loss: 0.2655 - accuracy: 0.8814 - val_loss: 0.1476 - val_accuracy: 0.9421 - 2s/epoch - 51ms/step Epoch 2/20 36/36 - 1s - loss: 0.1335 - accuracy: 0.9473 - val_loss: 0.1265 - val_accuracy: 0.9533 - 872ms/epoch - 24ms/step Epoch 3/20 36/36 - 1s - loss: 0.1139 - accuracy: 0.9555 - val_loss: 0.1123 - val_accuracy: 0.9558 - 867ms/epoch - 24ms/step Epoch 4/20 36/36 - 1s - loss: 0.1039 - accuracy: 0.9584 - val_loss: 0.1167 - val_accuracy: 0.9558 - 851ms/epoch - 24ms/step Epoch 5/20 36/36 - 1s - loss: 0.0939 - accuracy: 0.9622 - val_loss: 0.1265 - val_accuracy: 0.9517 - 832ms/epoch - 23ms/step Epoch 6/20 36/36 - 1s - loss: 0.0955 - accuracy: 0.9611 - val_loss: 0.1016 - val_accuracy: 0.9604 - 848ms/epoch - 24ms/step Epoch 7/20 36/36 - 1s - loss: 0.0907 - accuracy: 0.9635 - val_loss: 0.1122 - val_accuracy: 0.9548 - 543ms/epoch - 15ms/step Epoch 8/20 36/36 - 1s - loss: 0.0903 - accuracy: 0.9644 - val_loss: 0.1113 - val_accuracy: 0.9588 - 513ms/epoch - 14ms/step Epoch 9/20 36/36 - 1s - loss: 0.0945 - accuracy: 0.9624 - val_loss: 0.1021 - val_accuracy: 0.9619 - 521ms/epoch - 14ms/step Epoch 10/20 36/36 - 1s - loss: 0.0762 - accuracy: 0.9690 - val_loss: 0.0922 - val_accuracy: 0.9731 - 507ms/epoch - 14ms/step Epoch 11/20 36/36 - 1s - loss: 0.0788 - accuracy: 0.9698 - val_loss: 0.0894 - val_accuracy: 0.9675 - 552ms/epoch - 15ms/step Epoch 12/20 36/36 - 1s - loss: 0.0753 - accuracy: 0.9693 - val_loss: 0.1070 - val_accuracy: 0.9629 - 545ms/epoch - 15ms/step Epoch 13/20 36/36 - 1s - loss: 0.0720 - accuracy: 0.9683 - val_loss: 0.1013 - val_accuracy: 0.9629 - 550ms/epoch - 15ms/step Epoch 14/20 36/36 - 1s - loss: 0.0735 - accuracy: 0.9696 - val_loss: 0.0869 - val_accuracy: 0.9665 - 508ms/epoch - 14ms/step Epoch 15/20 36/36 - 1s - loss: 0.0682 - accuracy: 0.9729 - val_loss: 0.1062 - val_accuracy: 0.9609 - 552ms/epoch - 15ms/step Epoch 16/20 36/36 - 1s - loss: 0.0707 - accuracy: 0.9704 - val_loss: 0.0930 - val_accuracy: 0.9695 - 541ms/epoch - 15ms/step Epoch 17/20 36/36 - 1s - loss: 0.0691 - accuracy: 0.9734 - val_loss: 0.1152 - val_accuracy: 0.9614 - 559ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0670 - accuracy: 0.9727 - val_loss: 0.1041 - val_accuracy: 0.9634 - 511ms/epoch - 14ms/step Epoch 19/20 36/36 - 1s - loss: 0.0649 - accuracy: 0.9734 - val_loss: 0.1113 - val_accuracy: 0.9543 - 600ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.0611 - accuracy: 0.9724 - val_loss: 0.0896 - val_accuracy: 0.9726 - 633ms/epoch - 18ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 7]: 0.9725610017776489 Epoch 1/20 36/36 - 2s - loss: 0.2885 - accuracy: 0.8823 - val_loss: 0.1682 - val_accuracy: 0.9411 - 2s/epoch - 45ms/step Epoch 2/20 36/36 - 1s - loss: 0.1706 - accuracy: 0.9342 - val_loss: 0.1399 - val_accuracy: 0.9461 - 759ms/epoch - 21ms/step Epoch 3/20 36/36 - 1s - loss: 0.1459 - accuracy: 0.9422 - val_loss: 0.1242 - val_accuracy: 0.9502 - 985ms/epoch - 27ms/step Epoch 4/20 36/36 - 1s - loss: 0.1323 - accuracy: 0.9478 - val_loss: 0.1089 - val_accuracy: 0.9522 - 962ms/epoch - 27ms/step Epoch 5/20 36/36 - 1s - loss: 0.1261 - accuracy: 0.9488 - val_loss: 0.1404 - val_accuracy: 0.9380 - 981ms/epoch - 27ms/step Epoch 6/20 36/36 - 1s - loss: 0.1216 - accuracy: 0.9496 - val_loss: 0.1193 - val_accuracy: 0.9533 - 969ms/epoch - 27ms/step Epoch 7/20 36/36 - 1s - loss: 0.1183 - accuracy: 0.9506 - val_loss: 0.1367 - val_accuracy: 0.9446 - 812ms/epoch - 23ms/step Epoch 8/20 36/36 - 1s - loss: 0.1189 - accuracy: 0.9524 - val_loss: 0.1063 - val_accuracy: 0.9573 - 590ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1186 - accuracy: 0.9539 - val_loss: 0.1017 - val_accuracy: 0.9639 - 600ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1139 - accuracy: 0.9546 - val_loss: 0.1050 - val_accuracy: 0.9614 - 585ms/epoch - 16ms/step Epoch 11/20 36/36 - 1s - loss: 0.1079 - accuracy: 0.9532 - val_loss: 0.0915 - val_accuracy: 0.9715 - 631ms/epoch - 18ms/step Epoch 12/20 36/36 - 1s - loss: 0.1032 - accuracy: 0.9546 - val_loss: 0.1104 - val_accuracy: 0.9573 - 617ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.1033 - accuracy: 0.9566 - val_loss: 0.0928 - val_accuracy: 0.9670 - 726ms/epoch - 20ms/step Epoch 14/20 36/36 - 1s - loss: 0.1055 - accuracy: 0.9570 - val_loss: 0.1022 - val_accuracy: 0.9609 - 631ms/epoch - 18ms/step Epoch 15/20 36/36 - 1s - loss: 0.1005 - accuracy: 0.9574 - val_loss: 0.1105 - val_accuracy: 0.9568 - 610ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.0986 - accuracy: 0.9595 - val_loss: 0.1074 - val_accuracy: 0.9588 - 713ms/epoch - 20ms/step Epoch 17/20 36/36 - 1s - loss: 0.0976 - accuracy: 0.9600 - val_loss: 0.1069 - val_accuracy: 0.9578 - 650ms/epoch - 18ms/step Epoch 18/20 36/36 - 1s - loss: 0.1019 - accuracy: 0.9583 - val_loss: 0.1115 - val_accuracy: 0.9492 - 582ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0987 - accuracy: 0.9595 - val_loss: 0.0995 - val_accuracy: 0.9660 - 572ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0920 - accuracy: 0.9627 - val_loss: 0.1004 - val_accuracy: 0.9649 - 618ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 3]: 0.9649389982223511 Epoch 1/20 36/36 - 2s - loss: 0.2836 - accuracy: 0.8759 - val_loss: 0.1337 - val_accuracy: 0.9507 - 2s/epoch - 44ms/step Epoch 2/20 36/36 - 1s - loss: 0.1589 - accuracy: 0.9408 - val_loss: 0.1227 - val_accuracy: 0.9527 - 906ms/epoch - 25ms/step Epoch 3/20 36/36 - 1s - loss: 0.1437 - accuracy: 0.9453 - val_loss: 0.1623 - val_accuracy: 0.9375 - 927ms/epoch - 26ms/step Epoch 4/20 36/36 - 1s - loss: 0.1297 - accuracy: 0.9493 - val_loss: 0.1248 - val_accuracy: 0.9507 - 918ms/epoch - 26ms/step Epoch 5/20 36/36 - 1s - loss: 0.1203 - accuracy: 0.9522 - val_loss: 0.0976 - val_accuracy: 0.9604 - 922ms/epoch - 26ms/step Epoch 6/20 36/36 - 1s - loss: 0.1212 - accuracy: 0.9531 - val_loss: 0.1026 - val_accuracy: 0.9629 - 937ms/epoch - 26ms/step Epoch 7/20 36/36 - 1s - loss: 0.1106 - accuracy: 0.9535 - val_loss: 0.1003 - val_accuracy: 0.9614 - 603ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.1053 - accuracy: 0.9548 - val_loss: 0.1053 - val_accuracy: 0.9599 - 588ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1030 - accuracy: 0.9568 - val_loss: 0.1057 - val_accuracy: 0.9573 - 597ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1006 - accuracy: 0.9593 - val_loss: 0.1014 - val_accuracy: 0.9593 - 613ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1047 - accuracy: 0.9564 - val_loss: 0.0976 - val_accuracy: 0.9614 - 634ms/epoch - 18ms/step Epoch 12/20 36/36 - 1s - loss: 0.0970 - accuracy: 0.9608 - val_loss: 0.1096 - val_accuracy: 0.9573 - 765ms/epoch - 21ms/step Epoch 13/20 36/36 - 1s - loss: 0.0951 - accuracy: 0.9619 - val_loss: 0.0976 - val_accuracy: 0.9660 - 608ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.0943 - accuracy: 0.9598 - val_loss: 0.1077 - val_accuracy: 0.9624 - 573ms/epoch - 16ms/step Epoch 15/20 36/36 - 1s - loss: 0.0972 - accuracy: 0.9599 - val_loss: 0.1040 - val_accuracy: 0.9660 - 617ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.1003 - accuracy: 0.9592 - val_loss: 0.1242 - val_accuracy: 0.9487 - 572ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.0939 - accuracy: 0.9603 - val_loss: 0.1114 - val_accuracy: 0.9599 - 567ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.0871 - accuracy: 0.9625 - val_loss: 0.0978 - val_accuracy: 0.9721 - 573ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.0886 - accuracy: 0.9604 - val_loss: 0.0987 - val_accuracy: 0.9654 - 578ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.0898 - accuracy: 0.9615 - val_loss: 0.1135 - val_accuracy: 0.9527 - 576ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 5]: 0.9527438879013062 Epoch 1/20 36/36 - 2s - loss: 0.2697 - accuracy: 0.8873 - val_loss: 0.1344 - val_accuracy: 0.9502 - 2s/epoch - 42ms/step Epoch 2/20 36/36 - 1s - loss: 0.1599 - accuracy: 0.9393 - val_loss: 0.1114 - val_accuracy: 0.9563 - 861ms/epoch - 24ms/step Epoch 3/20 36/36 - 1s - loss: 0.1291 - accuracy: 0.9493 - val_loss: 0.1129 - val_accuracy: 0.9512 - 849ms/epoch - 24ms/step Epoch 4/20 36/36 - 1s - loss: 0.1203 - accuracy: 0.9534 - val_loss: 0.0978 - val_accuracy: 0.9624 - 854ms/epoch - 24ms/step Epoch 5/20 36/36 - 1s - loss: 0.1181 - accuracy: 0.9540 - val_loss: 0.1098 - val_accuracy: 0.9588 - 855ms/epoch - 24ms/step Epoch 6/20 36/36 - 1s - loss: 0.1142 - accuracy: 0.9542 - val_loss: 0.1061 - val_accuracy: 0.9568 - 836ms/epoch - 23ms/step Epoch 7/20 36/36 - 1s - loss: 0.1052 - accuracy: 0.9562 - val_loss: 0.1176 - val_accuracy: 0.9522 - 704ms/epoch - 20ms/step Epoch 8/20 36/36 - 1s - loss: 0.0964 - accuracy: 0.9606 - val_loss: 0.1342 - val_accuracy: 0.9421 - 551ms/epoch - 15ms/step Epoch 9/20 36/36 - 1s - loss: 0.0944 - accuracy: 0.9606 - val_loss: 0.1118 - val_accuracy: 0.9487 - 546ms/epoch - 15ms/step Epoch 10/20 36/36 - 1s - loss: 0.0969 - accuracy: 0.9594 - val_loss: 0.1014 - val_accuracy: 0.9604 - 550ms/epoch - 15ms/step Epoch 11/20 36/36 - 1s - loss: 0.0951 - accuracy: 0.9613 - val_loss: 0.1062 - val_accuracy: 0.9548 - 500ms/epoch - 14ms/step Epoch 12/20 36/36 - 1s - loss: 0.0914 - accuracy: 0.9643 - val_loss: 0.1274 - val_accuracy: 0.9568 - 540ms/epoch - 15ms/step Epoch 13/20 36/36 - 1s - loss: 0.0939 - accuracy: 0.9580 - val_loss: 0.1040 - val_accuracy: 0.9609 - 514ms/epoch - 14ms/step Epoch 14/20 36/36 - 1s - loss: 0.0909 - accuracy: 0.9618 - val_loss: 0.1230 - val_accuracy: 0.9533 - 519ms/epoch - 14ms/step Epoch 15/20 36/36 - 1s - loss: 0.0977 - accuracy: 0.9602 - val_loss: 0.0997 - val_accuracy: 0.9568 - 531ms/epoch - 15ms/step Epoch 16/20 36/36 - 1s - loss: 0.0846 - accuracy: 0.9646 - val_loss: 0.0990 - val_accuracy: 0.9654 - 514ms/epoch - 14ms/step Epoch 17/20 36/36 - 1s - loss: 0.0823 - accuracy: 0.9645 - val_loss: 0.1208 - val_accuracy: 0.9512 - 535ms/epoch - 15ms/step Epoch 18/20 36/36 - 1s - loss: 0.0881 - accuracy: 0.9623 - val_loss: 0.1077 - val_accuracy: 0.9609 - 549ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.0821 - accuracy: 0.9638 - val_loss: 0.0993 - val_accuracy: 0.9614 - 500ms/epoch - 14ms/step Epoch 20/20 36/36 - 1s - loss: 0.0821 - accuracy: 0.9635 - val_loss: 0.1030 - val_accuracy: 0.9634 - 537ms/epoch - 15ms/step Validation accuracy for Model of [Learning Rate 0.01 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 7]: 0.9634146094322205 Epoch 1/20 36/36 - 1s - loss: 0.8639 - accuracy: 0.7784 - val_loss: 0.2070 - val_accuracy: 0.9228 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.1554 - accuracy: 0.9335 - val_loss: 0.1580 - val_accuracy: 0.9421 - 343ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.1457 - accuracy: 0.9373 - val_loss: 0.1369 - val_accuracy: 0.9395 - 322ms/epoch - 9ms/step Epoch 4/20 36/36 - 0s - loss: 0.1409 - accuracy: 0.9405 - val_loss: 0.1306 - val_accuracy: 0.9512 - 280ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1304 - accuracy: 0.9423 - val_loss: 0.1192 - val_accuracy: 0.9593 - 329ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.1342 - accuracy: 0.9375 - val_loss: 0.1510 - val_accuracy: 0.9538 - 361ms/epoch - 10ms/step Epoch 7/20 36/36 - 0s - loss: 0.1250 - accuracy: 0.9407 - val_loss: 0.1167 - val_accuracy: 0.9375 - 429ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.1194 - accuracy: 0.9440 - val_loss: 0.1205 - val_accuracy: 0.9446 - 447ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.1194 - accuracy: 0.9447 - val_loss: 0.1068 - val_accuracy: 0.9466 - 446ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.1175 - accuracy: 0.9443 - val_loss: 0.1245 - val_accuracy: 0.9385 - 483ms/epoch - 13ms/step Epoch 11/20 36/36 - 0s - loss: 0.1218 - accuracy: 0.9407 - val_loss: 0.1244 - val_accuracy: 0.9405 - 452ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1243 - accuracy: 0.9426 - val_loss: 0.1157 - val_accuracy: 0.9472 - 467ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.1102 - accuracy: 0.9520 - val_loss: 0.1088 - val_accuracy: 0.9456 - 459ms/epoch - 13ms/step Epoch 14/20 36/36 - 1s - loss: 0.1084 - accuracy: 0.9502 - val_loss: 0.1179 - val_accuracy: 0.9416 - 538ms/epoch - 15ms/step Epoch 15/20 36/36 - 1s - loss: 0.1174 - accuracy: 0.9480 - val_loss: 0.1494 - val_accuracy: 0.9345 - 577ms/epoch - 16ms/step Epoch 16/20 36/36 - 0s - loss: 0.1226 - accuracy: 0.9434 - val_loss: 0.1197 - val_accuracy: 0.9451 - 406ms/epoch - 11ms/step Epoch 17/20 36/36 - 0s - loss: 0.1368 - accuracy: 0.9327 - val_loss: 0.1455 - val_accuracy: 0.9619 - 292ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.1239 - accuracy: 0.9390 - val_loss: 0.1240 - val_accuracy: 0.9624 - 281ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.1125 - accuracy: 0.9415 - val_loss: 0.1242 - val_accuracy: 0.9588 - 298ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.1221 - accuracy: 0.9430 - val_loss: 0.1376 - val_accuracy: 0.9609 - 294ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 3]: 0.9608739614486694 Epoch 1/20 36/36 - 1s - loss: 0.7538 - accuracy: 0.7979 - val_loss: 0.2086 - val_accuracy: 0.9207 - 1s/epoch - 34ms/step Epoch 2/20 36/36 - 0s - loss: 0.1819 - accuracy: 0.9264 - val_loss: 0.1304 - val_accuracy: 0.9522 - 378ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.1369 - accuracy: 0.9466 - val_loss: 0.1381 - val_accuracy: 0.9512 - 396ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.1324 - accuracy: 0.9480 - val_loss: 0.1111 - val_accuracy: 0.9578 - 322ms/epoch - 9ms/step Epoch 5/20 36/36 - 0s - loss: 0.1275 - accuracy: 0.9483 - val_loss: 0.1297 - val_accuracy: 0.9578 - 277ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1315 - accuracy: 0.9459 - val_loss: 0.1320 - val_accuracy: 0.9487 - 291ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1159 - accuracy: 0.9533 - val_loss: 0.1404 - val_accuracy: 0.9583 - 275ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1143 - accuracy: 0.9539 - val_loss: 0.1097 - val_accuracy: 0.9604 - 288ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1029 - accuracy: 0.9590 - val_loss: 0.1190 - val_accuracy: 0.9573 - 336ms/epoch - 9ms/step Epoch 10/20 36/36 - 0s - loss: 0.1028 - accuracy: 0.9581 - val_loss: 0.1290 - val_accuracy: 0.9538 - 288ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1124 - accuracy: 0.9551 - val_loss: 0.1180 - val_accuracy: 0.9568 - 290ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1017 - accuracy: 0.9589 - val_loss: 0.1239 - val_accuracy: 0.9599 - 277ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1124 - accuracy: 0.9538 - val_loss: 0.1022 - val_accuracy: 0.9710 - 338ms/epoch - 9ms/step Epoch 14/20 36/36 - 0s - loss: 0.1118 - accuracy: 0.9555 - val_loss: 0.0941 - val_accuracy: 0.9705 - 334ms/epoch - 9ms/step Epoch 15/20 36/36 - 0s - loss: 0.1064 - accuracy: 0.9567 - val_loss: 0.1364 - val_accuracy: 0.9573 - 281ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.1018 - accuracy: 0.9580 - val_loss: 0.1020 - val_accuracy: 0.9624 - 460ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.0946 - accuracy: 0.9602 - val_loss: 0.1604 - val_accuracy: 0.9614 - 460ms/epoch - 13ms/step Epoch 18/20 36/36 - 1s - loss: 0.1055 - accuracy: 0.9572 - val_loss: 0.1082 - val_accuracy: 0.9639 - 539ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.0973 - accuracy: 0.9634 - val_loss: 0.1128 - val_accuracy: 0.9639 - 559ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.1026 - accuracy: 0.9564 - val_loss: 0.1391 - val_accuracy: 0.9604 - 565ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 5]: 0.9603658318519592 Epoch 1/20 36/36 - 2s - loss: 1.5714 - accuracy: 0.7252 - val_loss: 0.2476 - val_accuracy: 0.9157 - 2s/epoch - 44ms/step Epoch 2/20 36/36 - 0s - loss: 0.1875 - accuracy: 0.9228 - val_loss: 0.1403 - val_accuracy: 0.9456 - 313ms/epoch - 9ms/step Epoch 3/20 36/36 - 0s - loss: 0.1493 - accuracy: 0.9348 - val_loss: 0.1575 - val_accuracy: 0.9522 - 290ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1382 - accuracy: 0.9357 - val_loss: 0.1242 - val_accuracy: 0.9533 - 279ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1246 - accuracy: 0.9383 - val_loss: 0.1277 - val_accuracy: 0.9553 - 279ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1252 - accuracy: 0.9387 - val_loss: 0.1344 - val_accuracy: 0.9355 - 276ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1251 - accuracy: 0.9373 - val_loss: 0.1572 - val_accuracy: 0.9456 - 280ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1198 - accuracy: 0.9442 - val_loss: 0.1377 - val_accuracy: 0.9380 - 319ms/epoch - 9ms/step Epoch 9/20 36/36 - 0s - loss: 0.1205 - accuracy: 0.9406 - val_loss: 0.1389 - val_accuracy: 0.9278 - 319ms/epoch - 9ms/step Epoch 10/20 36/36 - 0s - loss: 0.1174 - accuracy: 0.9449 - val_loss: 0.1463 - val_accuracy: 0.9141 - 293ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1178 - accuracy: 0.9420 - val_loss: 0.1286 - val_accuracy: 0.9426 - 267ms/epoch - 7ms/step Epoch 12/20 36/36 - 0s - loss: 0.1119 - accuracy: 0.9453 - val_loss: 0.1422 - val_accuracy: 0.9223 - 279ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1113 - accuracy: 0.9469 - val_loss: 0.1465 - val_accuracy: 0.9299 - 326ms/epoch - 9ms/step Epoch 14/20 36/36 - 0s - loss: 0.1118 - accuracy: 0.9440 - val_loss: 0.1382 - val_accuracy: 0.9294 - 310ms/epoch - 9ms/step Epoch 15/20 36/36 - 0s - loss: 0.1106 - accuracy: 0.9441 - val_loss: 0.1470 - val_accuracy: 0.9339 - 365ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.1082 - accuracy: 0.9471 - val_loss: 0.1220 - val_accuracy: 0.9329 - 346ms/epoch - 10ms/step Epoch 17/20 36/36 - 0s - loss: 0.1115 - accuracy: 0.9438 - val_loss: 0.1348 - val_accuracy: 0.9314 - 284ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.1130 - accuracy: 0.9471 - val_loss: 0.1753 - val_accuracy: 0.9141 - 271ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.1293 - accuracy: 0.9396 - val_loss: 0.1496 - val_accuracy: 0.9390 - 274ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.1208 - accuracy: 0.9462 - val_loss: 0.1313 - val_accuracy: 0.9253 - 283ms/epoch - 8ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.3 | FilterSize 7]: 0.9253048896789551 Epoch 1/20 36/36 - 2s - loss: 0.6708 - accuracy: 0.7854 - val_loss: 0.2244 - val_accuracy: 0.9106 - 2s/epoch - 52ms/step Epoch 2/20 36/36 - 0s - loss: 0.2067 - accuracy: 0.9153 - val_loss: 0.1683 - val_accuracy: 0.9370 - 482ms/epoch - 13ms/step Epoch 3/20 36/36 - 0s - loss: 0.1953 - accuracy: 0.9226 - val_loss: 0.1886 - val_accuracy: 0.9060 - 440ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.2123 - accuracy: 0.9150 - val_loss: 0.1951 - val_accuracy: 0.9009 - 498ms/epoch - 14ms/step Epoch 5/20 36/36 - 0s - loss: 0.1999 - accuracy: 0.9135 - val_loss: 0.1488 - val_accuracy: 0.9309 - 481ms/epoch - 13ms/step Epoch 6/20 36/36 - 0s - loss: 0.1859 - accuracy: 0.9176 - val_loss: 0.1413 - val_accuracy: 0.9309 - 328ms/epoch - 9ms/step Epoch 7/20 36/36 - 0s - loss: 0.1711 - accuracy: 0.9264 - val_loss: 0.1487 - val_accuracy: 0.9248 - 346ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.1714 - accuracy: 0.9211 - val_loss: 0.1512 - val_accuracy: 0.9319 - 302ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1716 - accuracy: 0.9236 - val_loss: 0.1319 - val_accuracy: 0.9324 - 326ms/epoch - 9ms/step Epoch 10/20 36/36 - 0s - loss: 0.1576 - accuracy: 0.9303 - val_loss: 0.1426 - val_accuracy: 0.9324 - 300ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1579 - accuracy: 0.9317 - val_loss: 0.1494 - val_accuracy: 0.9157 - 299ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1517 - accuracy: 0.9350 - val_loss: 0.1568 - val_accuracy: 0.9273 - 306ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1595 - accuracy: 0.9331 - val_loss: 0.1412 - val_accuracy: 0.9319 - 289ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.1549 - accuracy: 0.9338 - val_loss: 0.1307 - val_accuracy: 0.9212 - 342ms/epoch - 9ms/step Epoch 15/20 36/36 - 0s - loss: 0.1601 - accuracy: 0.9318 - val_loss: 0.1666 - val_accuracy: 0.9157 - 341ms/epoch - 9ms/step Epoch 16/20 36/36 - 0s - loss: 0.1546 - accuracy: 0.9333 - val_loss: 0.1557 - val_accuracy: 0.9319 - 294ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.1654 - accuracy: 0.9300 - val_loss: 0.1318 - val_accuracy: 0.9380 - 289ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.1513 - accuracy: 0.9378 - val_loss: 0.1642 - val_accuracy: 0.9101 - 349ms/epoch - 10ms/step Epoch 19/20 36/36 - 0s - loss: 0.1561 - accuracy: 0.9336 - val_loss: 0.1590 - val_accuracy: 0.9268 - 290ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.1632 - accuracy: 0.9272 - val_loss: 0.1531 - val_accuracy: 0.9202 - 339ms/epoch - 9ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 3]: 0.920223593711853 Epoch 1/20 36/36 - 1s - loss: 0.8832 - accuracy: 0.7802 - val_loss: 0.2590 - val_accuracy: 0.8211 - 1s/epoch - 36ms/step Epoch 2/20 36/36 - 0s - loss: 0.2042 - accuracy: 0.9099 - val_loss: 0.1808 - val_accuracy: 0.9416 - 378ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.2061 - accuracy: 0.9199 - val_loss: 0.1977 - val_accuracy: 0.9116 - 357ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1747 - accuracy: 0.9253 - val_loss: 0.1724 - val_accuracy: 0.9380 - 401ms/epoch - 11ms/step Epoch 5/20 36/36 - 0s - loss: 0.1643 - accuracy: 0.9295 - val_loss: 0.1148 - val_accuracy: 0.9558 - 333ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.1685 - accuracy: 0.9301 - val_loss: 0.1779 - val_accuracy: 0.9426 - 295ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1707 - accuracy: 0.9299 - val_loss: 0.1628 - val_accuracy: 0.9527 - 307ms/epoch - 9ms/step Epoch 8/20 36/36 - 0s - loss: 0.1649 - accuracy: 0.9318 - val_loss: 0.1434 - val_accuracy: 0.9461 - 286ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1691 - accuracy: 0.9259 - val_loss: 0.1447 - val_accuracy: 0.9426 - 328ms/epoch - 9ms/step Epoch 10/20 36/36 - 0s - loss: 0.1521 - accuracy: 0.9315 - val_loss: 0.1574 - val_accuracy: 0.9421 - 284ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1436 - accuracy: 0.9342 - val_loss: 0.1429 - val_accuracy: 0.9548 - 289ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.1510 - accuracy: 0.9297 - val_loss: 0.1615 - val_accuracy: 0.9599 - 301ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.1615 - accuracy: 0.9283 - val_loss: 0.1741 - val_accuracy: 0.9446 - 394ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.1493 - accuracy: 0.9334 - val_loss: 0.1331 - val_accuracy: 0.9533 - 479ms/epoch - 13ms/step Epoch 15/20 36/36 - 0s - loss: 0.1528 - accuracy: 0.9332 - val_loss: 0.1658 - val_accuracy: 0.9502 - 447ms/epoch - 12ms/step Epoch 16/20 36/36 - 1s - loss: 0.1528 - accuracy: 0.9346 - val_loss: 0.1697 - val_accuracy: 0.9497 - 521ms/epoch - 14ms/step Epoch 17/20 36/36 - 1s - loss: 0.1558 - accuracy: 0.9324 - val_loss: 0.2129 - val_accuracy: 0.9538 - 540ms/epoch - 15ms/step Epoch 18/20 36/36 - 1s - loss: 0.1733 - accuracy: 0.9288 - val_loss: 0.2084 - val_accuracy: 0.9421 - 521ms/epoch - 14ms/step Epoch 19/20 36/36 - 0s - loss: 0.1550 - accuracy: 0.9334 - val_loss: 0.1440 - val_accuracy: 0.9583 - 447ms/epoch - 12ms/step Epoch 20/20 36/36 - 0s - loss: 0.1497 - accuracy: 0.9325 - val_loss: 0.1501 - val_accuracy: 0.9522 - 486ms/epoch - 13ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 5]: 0.952235758304596 Epoch 1/20 36/36 - 1s - loss: 1.0778 - accuracy: 0.7635 - val_loss: 0.2120 - val_accuracy: 0.9304 - 1s/epoch - 41ms/step Epoch 2/20 36/36 - 0s - loss: 0.2071 - accuracy: 0.9092 - val_loss: 0.1484 - val_accuracy: 0.9533 - 365ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.1707 - accuracy: 0.9224 - val_loss: 0.1395 - val_accuracy: 0.9548 - 374ms/epoch - 10ms/step Epoch 4/20 36/36 - 0s - loss: 0.1703 - accuracy: 0.9194 - val_loss: 0.1598 - val_accuracy: 0.9538 - 347ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.1710 - accuracy: 0.9226 - val_loss: 0.1645 - val_accuracy: 0.9512 - 312ms/epoch - 9ms/step Epoch 6/20 36/36 - 0s - loss: 0.1605 - accuracy: 0.9278 - val_loss: 0.1293 - val_accuracy: 0.9578 - 272ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1512 - accuracy: 0.9267 - val_loss: 0.1279 - val_accuracy: 0.9548 - 286ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1400 - accuracy: 0.9318 - val_loss: 0.1352 - val_accuracy: 0.9563 - 319ms/epoch - 9ms/step Epoch 9/20 36/36 - 0s - loss: 0.1437 - accuracy: 0.9319 - val_loss: 0.1160 - val_accuracy: 0.9558 - 281ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1342 - accuracy: 0.9336 - val_loss: 0.1273 - val_accuracy: 0.9654 - 279ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1369 - accuracy: 0.9315 - val_loss: 0.1194 - val_accuracy: 0.9593 - 263ms/epoch - 7ms/step Epoch 12/20 36/36 - 0s - loss: 0.1457 - accuracy: 0.9286 - val_loss: 0.1056 - val_accuracy: 0.9629 - 326ms/epoch - 9ms/step Epoch 13/20 36/36 - 0s - loss: 0.1362 - accuracy: 0.9328 - val_loss: 0.1510 - val_accuracy: 0.9284 - 275ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.1284 - accuracy: 0.9360 - val_loss: 0.1484 - val_accuracy: 0.9258 - 278ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.1325 - accuracy: 0.9307 - val_loss: 0.1213 - val_accuracy: 0.9629 - 273ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.1332 - accuracy: 0.9318 - val_loss: 0.1927 - val_accuracy: 0.8882 - 266ms/epoch - 7ms/step Epoch 17/20 36/36 - 0s - loss: 0.1369 - accuracy: 0.9317 - val_loss: 0.1397 - val_accuracy: 0.9329 - 273ms/epoch - 8ms/step Epoch 18/20 36/36 - 0s - loss: 0.1466 - accuracy: 0.9305 - val_loss: 0.1554 - val_accuracy: 0.9157 - 267ms/epoch - 7ms/step Epoch 19/20 36/36 - 0s - loss: 0.1644 - accuracy: 0.9076 - val_loss: 0.1543 - val_accuracy: 0.9583 - 284ms/epoch - 8ms/step Epoch 20/20 36/36 - 0s - loss: 0.1570 - accuracy: 0.9191 - val_loss: 0.1358 - val_accuracy: 0.9253 - 270ms/epoch - 7ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.5 | FilterSize 7]: 0.9253048896789551 Epoch 1/20 36/36 - 1s - loss: 0.5231 - accuracy: 0.8340 - val_loss: 0.1594 - val_accuracy: 0.9400 - 1s/epoch - 35ms/step Epoch 2/20 36/36 - 0s - loss: 0.1996 - accuracy: 0.9249 - val_loss: 0.1559 - val_accuracy: 0.9477 - 293ms/epoch - 8ms/step Epoch 3/20 36/36 - 0s - loss: 0.1958 - accuracy: 0.9255 - val_loss: 0.1410 - val_accuracy: 0.9487 - 288ms/epoch - 8ms/step Epoch 4/20 36/36 - 0s - loss: 0.1971 - accuracy: 0.9265 - val_loss: 0.1730 - val_accuracy: 0.9350 - 292ms/epoch - 8ms/step Epoch 5/20 36/36 - 0s - loss: 0.1838 - accuracy: 0.9251 - val_loss: 0.1387 - val_accuracy: 0.9482 - 290ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.1837 - accuracy: 0.9276 - val_loss: 0.1562 - val_accuracy: 0.9477 - 301ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.1795 - accuracy: 0.9294 - val_loss: 0.1535 - val_accuracy: 0.9446 - 296ms/epoch - 8ms/step Epoch 8/20 36/36 - 0s - loss: 0.1882 - accuracy: 0.9236 - val_loss: 0.1573 - val_accuracy: 0.9502 - 289ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.1924 - accuracy: 0.9234 - val_loss: 0.1722 - val_accuracy: 0.9466 - 408ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.1943 - accuracy: 0.9218 - val_loss: 0.1728 - val_accuracy: 0.9421 - 457ms/epoch - 13ms/step Epoch 11/20 36/36 - 0s - loss: 0.1895 - accuracy: 0.9210 - val_loss: 0.1387 - val_accuracy: 0.9578 - 463ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1854 - accuracy: 0.9242 - val_loss: 0.1345 - val_accuracy: 0.9522 - 464ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.1898 - accuracy: 0.9234 - val_loss: 0.1563 - val_accuracy: 0.9527 - 469ms/epoch - 13ms/step Epoch 14/20 36/36 - 0s - loss: 0.1985 - accuracy: 0.9257 - val_loss: 0.1781 - val_accuracy: 0.9517 - 470ms/epoch - 13ms/step Epoch 15/20 36/36 - 0s - loss: 0.1963 - accuracy: 0.9203 - val_loss: 0.1935 - val_accuracy: 0.9426 - 463ms/epoch - 13ms/step Epoch 16/20 36/36 - 0s - loss: 0.1811 - accuracy: 0.9271 - val_loss: 0.1471 - val_accuracy: 0.9553 - 453ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.1925 - accuracy: 0.9232 - val_loss: 0.1527 - val_accuracy: 0.9553 - 424ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.2007 - accuracy: 0.9151 - val_loss: 0.1784 - val_accuracy: 0.9502 - 469ms/epoch - 13ms/step Epoch 19/20 36/36 - 0s - loss: 0.1977 - accuracy: 0.9134 - val_loss: 0.1519 - val_accuracy: 0.9558 - 406ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.1900 - accuracy: 0.9178 - val_loss: 0.1644 - val_accuracy: 0.9553 - 321ms/epoch - 9ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 3]: 0.9552845358848572 Epoch 1/20 36/36 - 1s - loss: 1.1387 - accuracy: 0.7578 - val_loss: 0.2500 - val_accuracy: 0.9360 - 1s/epoch - 36ms/step Epoch 2/20 36/36 - 0s - loss: 0.2382 - accuracy: 0.9060 - val_loss: 0.1947 - val_accuracy: 0.9436 - 366ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.2163 - accuracy: 0.9126 - val_loss: 0.1756 - val_accuracy: 0.9548 - 379ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.2066 - accuracy: 0.9143 - val_loss: 0.1884 - val_accuracy: 0.9487 - 372ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.2231 - accuracy: 0.9129 - val_loss: 0.1841 - val_accuracy: 0.9482 - 368ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1910 - accuracy: 0.9190 - val_loss: 0.1694 - val_accuracy: 0.9512 - 326ms/epoch - 9ms/step Epoch 7/20 36/36 - 0s - loss: 0.1944 - accuracy: 0.9204 - val_loss: 0.1714 - val_accuracy: 0.9492 - 371ms/epoch - 10ms/step Epoch 8/20 36/36 - 0s - loss: 0.1928 - accuracy: 0.9180 - val_loss: 0.2279 - val_accuracy: 0.9456 - 339ms/epoch - 9ms/step Epoch 9/20 36/36 - 0s - loss: 0.2021 - accuracy: 0.9138 - val_loss: 0.1751 - val_accuracy: 0.9472 - 277ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.1936 - accuracy: 0.9194 - val_loss: 0.1668 - val_accuracy: 0.9583 - 274ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.1919 - accuracy: 0.9214 - val_loss: 0.1755 - val_accuracy: 0.9563 - 290ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.2016 - accuracy: 0.9156 - val_loss: 0.1712 - val_accuracy: 0.9538 - 293ms/epoch - 8ms/step Epoch 13/20 36/36 - 0s - loss: 0.2040 - accuracy: 0.9140 - val_loss: 0.1447 - val_accuracy: 0.9553 - 279ms/epoch - 8ms/step Epoch 14/20 36/36 - 0s - loss: 0.1919 - accuracy: 0.9174 - val_loss: 0.1236 - val_accuracy: 0.9644 - 288ms/epoch - 8ms/step Epoch 15/20 36/36 - 0s - loss: 0.2020 - accuracy: 0.9073 - val_loss: 0.1877 - val_accuracy: 0.9512 - 294ms/epoch - 8ms/step Epoch 16/20 36/36 - 0s - loss: 0.2132 - accuracy: 0.9089 - val_loss: 0.1752 - val_accuracy: 0.9568 - 273ms/epoch - 8ms/step Epoch 17/20 36/36 - 0s - loss: 0.2127 - accuracy: 0.9041 - val_loss: 0.1573 - val_accuracy: 0.9548 - 315ms/epoch - 9ms/step Epoch 18/20 36/36 - 0s - loss: 0.2492 - accuracy: 0.8840 - val_loss: 0.1900 - val_accuracy: 0.9497 - 277ms/epoch - 8ms/step Epoch 19/20 36/36 - 0s - loss: 0.2217 - accuracy: 0.8974 - val_loss: 0.1516 - val_accuracy: 0.9512 - 332ms/epoch - 9ms/step Epoch 20/20 36/36 - 0s - loss: 0.2095 - accuracy: 0.9016 - val_loss: 0.1543 - val_accuracy: 0.9563 - 418ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 5]: 0.9563007950782776 Epoch 1/20 36/36 - 1s - loss: 0.5238 - accuracy: 0.8103 - val_loss: 0.2157 - val_accuracy: 0.9370 - 1s/epoch - 40ms/step Epoch 2/20 36/36 - 0s - loss: 0.2472 - accuracy: 0.8972 - val_loss: 0.1684 - val_accuracy: 0.9466 - 385ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.2168 - accuracy: 0.9030 - val_loss: 0.1769 - val_accuracy: 0.9492 - 334ms/epoch - 9ms/step Epoch 4/20 36/36 - 0s - loss: 0.2009 - accuracy: 0.9188 - val_loss: 0.1537 - val_accuracy: 0.9543 - 358ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.2017 - accuracy: 0.9151 - val_loss: 0.1791 - val_accuracy: 0.9573 - 297ms/epoch - 8ms/step Epoch 6/20 36/36 - 0s - loss: 0.2236 - accuracy: 0.9080 - val_loss: 0.1928 - val_accuracy: 0.9405 - 283ms/epoch - 8ms/step Epoch 7/20 36/36 - 0s - loss: 0.2394 - accuracy: 0.8977 - val_loss: 0.1392 - val_accuracy: 0.9507 - 325ms/epoch - 9ms/step Epoch 8/20 36/36 - 0s - loss: 0.2637 - accuracy: 0.8861 - val_loss: 0.1932 - val_accuracy: 0.9512 - 282ms/epoch - 8ms/step Epoch 9/20 36/36 - 0s - loss: 0.3381 - accuracy: 0.8071 - val_loss: 0.2583 - val_accuracy: 0.9370 - 283ms/epoch - 8ms/step Epoch 10/20 36/36 - 0s - loss: 0.3552 - accuracy: 0.8180 - val_loss: 0.2572 - val_accuracy: 0.9431 - 271ms/epoch - 8ms/step Epoch 11/20 36/36 - 0s - loss: 0.3085 - accuracy: 0.8230 - val_loss: 0.2316 - val_accuracy: 0.9502 - 273ms/epoch - 8ms/step Epoch 12/20 36/36 - 0s - loss: 0.3016 - accuracy: 0.8382 - val_loss: 0.2284 - val_accuracy: 0.9527 - 311ms/epoch - 9ms/step Epoch 13/20 36/36 - 0s - loss: 0.3236 - accuracy: 0.8522 - val_loss: 0.3028 - val_accuracy: 0.9482 - 383ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.3541 - accuracy: 0.8073 - val_loss: 0.2328 - val_accuracy: 0.7683 - 322ms/epoch - 9ms/step Epoch 15/20 36/36 - 0s - loss: 0.3220 - accuracy: 0.8007 - val_loss: 0.2738 - val_accuracy: 0.7556 - 364ms/epoch - 10ms/step Epoch 16/20 36/36 - 0s - loss: 0.2994 - accuracy: 0.8102 - val_loss: 0.2220 - val_accuracy: 0.9533 - 327ms/epoch - 9ms/step Epoch 17/20 36/36 - 0s - loss: 0.3246 - accuracy: 0.7970 - val_loss: 0.2549 - val_accuracy: 0.7713 - 329ms/epoch - 9ms/step Epoch 18/20 36/36 - 0s - loss: 0.3041 - accuracy: 0.7997 - val_loss: 0.2588 - val_accuracy: 0.8023 - 332ms/epoch - 9ms/step Epoch 19/20 36/36 - 0s - loss: 0.3381 - accuracy: 0.8029 - val_loss: 0.4064 - val_accuracy: 0.7754 - 376ms/epoch - 10ms/step Epoch 20/20 36/36 - 0s - loss: 0.3495 - accuracy: 0.8045 - val_loss: 0.3124 - val_accuracy: 0.7109 - 357ms/epoch - 10ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 32 | Dropout Rate 0.7 | FilterSize 7]: 0.7108739614486694 Epoch 1/20 36/36 - 2s - loss: 1.0119 - accuracy: 0.7992 - val_loss: 0.1688 - val_accuracy: 0.9431 - 2s/epoch - 45ms/step Epoch 2/20 36/36 - 0s - loss: 0.1716 - accuracy: 0.9310 - val_loss: 0.1423 - val_accuracy: 0.9512 - 446ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1453 - accuracy: 0.9403 - val_loss: 0.1511 - val_accuracy: 0.9441 - 421ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.1405 - accuracy: 0.9419 - val_loss: 0.1302 - val_accuracy: 0.9426 - 420ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1265 - accuracy: 0.9452 - val_loss: 0.1115 - val_accuracy: 0.9614 - 433ms/epoch - 12ms/step Epoch 6/20 36/36 - 0s - loss: 0.1262 - accuracy: 0.9486 - val_loss: 0.1110 - val_accuracy: 0.9548 - 431ms/epoch - 12ms/step Epoch 7/20 36/36 - 1s - loss: 0.1208 - accuracy: 0.9460 - val_loss: 0.1183 - val_accuracy: 0.9487 - 502ms/epoch - 14ms/step Epoch 8/20 36/36 - 0s - loss: 0.1224 - accuracy: 0.9480 - val_loss: 0.1325 - val_accuracy: 0.9441 - 484ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.1257 - accuracy: 0.9488 - val_loss: 0.1299 - val_accuracy: 0.9527 - 468ms/epoch - 13ms/step Epoch 10/20 36/36 - 0s - loss: 0.1305 - accuracy: 0.9466 - val_loss: 0.1218 - val_accuracy: 0.9563 - 460ms/epoch - 13ms/step Epoch 11/20 36/36 - 0s - loss: 0.1154 - accuracy: 0.9525 - val_loss: 0.1029 - val_accuracy: 0.9675 - 494ms/epoch - 14ms/step Epoch 12/20 36/36 - 0s - loss: 0.1063 - accuracy: 0.9528 - val_loss: 0.1125 - val_accuracy: 0.9619 - 454ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.1121 - accuracy: 0.9535 - val_loss: 0.1168 - val_accuracy: 0.9522 - 483ms/epoch - 13ms/step Epoch 14/20 36/36 - 0s - loss: 0.1274 - accuracy: 0.9527 - val_loss: 0.1152 - val_accuracy: 0.9599 - 450ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.1233 - accuracy: 0.9458 - val_loss: 0.1137 - val_accuracy: 0.9588 - 488ms/epoch - 14ms/step Epoch 16/20 36/36 - 0s - loss: 0.1107 - accuracy: 0.9502 - val_loss: 0.1210 - val_accuracy: 0.9609 - 483ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.1116 - accuracy: 0.9529 - val_loss: 0.1025 - val_accuracy: 0.9634 - 441ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.1034 - accuracy: 0.9537 - val_loss: 0.1061 - val_accuracy: 0.9710 - 445ms/epoch - 12ms/step Epoch 19/20 36/36 - 0s - loss: 0.1088 - accuracy: 0.9559 - val_loss: 0.1176 - val_accuracy: 0.9548 - 424ms/epoch - 12ms/step Epoch 20/20 36/36 - 0s - loss: 0.1106 - accuracy: 0.9528 - val_loss: 0.1136 - val_accuracy: 0.9588 - 496ms/epoch - 14ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 3]: 0.9588414430618286 Epoch 1/20 36/36 - 2s - loss: 1.3186 - accuracy: 0.7987 - val_loss: 0.1714 - val_accuracy: 0.9365 - 2s/epoch - 56ms/step Epoch 2/20 36/36 - 1s - loss: 0.1657 - accuracy: 0.9316 - val_loss: 0.1319 - val_accuracy: 0.9482 - 627ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1450 - accuracy: 0.9362 - val_loss: 0.1134 - val_accuracy: 0.9538 - 698ms/epoch - 19ms/step Epoch 4/20 36/36 - 0s - loss: 0.1361 - accuracy: 0.9409 - val_loss: 0.1110 - val_accuracy: 0.9604 - 427ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1254 - accuracy: 0.9483 - val_loss: 0.1668 - val_accuracy: 0.9456 - 486ms/epoch - 13ms/step Epoch 6/20 36/36 - 0s - loss: 0.1346 - accuracy: 0.9481 - val_loss: 0.1087 - val_accuracy: 0.9583 - 460ms/epoch - 13ms/step Epoch 7/20 36/36 - 0s - loss: 0.1330 - accuracy: 0.9434 - val_loss: 0.1417 - val_accuracy: 0.9472 - 423ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.1203 - accuracy: 0.9439 - val_loss: 0.1477 - val_accuracy: 0.9507 - 397ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.1289 - accuracy: 0.9399 - val_loss: 0.1545 - val_accuracy: 0.9492 - 382ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.1168 - accuracy: 0.9452 - val_loss: 0.1023 - val_accuracy: 0.9634 - 398ms/epoch - 11ms/step Epoch 11/20 36/36 - 0s - loss: 0.1100 - accuracy: 0.9464 - val_loss: 0.1206 - val_accuracy: 0.9563 - 372ms/epoch - 10ms/step Epoch 12/20 36/36 - 0s - loss: 0.1065 - accuracy: 0.9507 - val_loss: 0.1056 - val_accuracy: 0.9639 - 430ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.0997 - accuracy: 0.9534 - val_loss: 0.1393 - val_accuracy: 0.9543 - 373ms/epoch - 10ms/step Epoch 14/20 36/36 - 0s - loss: 0.1067 - accuracy: 0.9508 - val_loss: 0.1078 - val_accuracy: 0.9619 - 440ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.1031 - accuracy: 0.9497 - val_loss: 0.1242 - val_accuracy: 0.9563 - 489ms/epoch - 14ms/step Epoch 16/20 36/36 - 0s - loss: 0.0975 - accuracy: 0.9516 - val_loss: 0.1168 - val_accuracy: 0.9441 - 473ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.1023 - accuracy: 0.9506 - val_loss: 0.1185 - val_accuracy: 0.9624 - 450ms/epoch - 12ms/step Epoch 18/20 36/36 - 0s - loss: 0.1102 - accuracy: 0.9449 - val_loss: 0.1285 - val_accuracy: 0.9395 - 414ms/epoch - 11ms/step Epoch 19/20 36/36 - 0s - loss: 0.0996 - accuracy: 0.9497 - val_loss: 0.1154 - val_accuracy: 0.9461 - 382ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.0926 - accuracy: 0.9563 - val_loss: 0.1300 - val_accuracy: 0.9238 - 408ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 5]: 0.9237805008888245 Epoch 1/20 36/36 - 1s - loss: 1.5615 - accuracy: 0.7916 - val_loss: 0.1848 - val_accuracy: 0.9304 - 1s/epoch - 39ms/step Epoch 2/20 36/36 - 0s - loss: 0.1890 - accuracy: 0.9236 - val_loss: 0.1504 - val_accuracy: 0.9345 - 418ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.1498 - accuracy: 0.9321 - val_loss: 0.1388 - val_accuracy: 0.9461 - 397ms/epoch - 11ms/step Epoch 4/20 36/36 - 1s - loss: 0.1380 - accuracy: 0.9382 - val_loss: 0.1488 - val_accuracy: 0.9492 - 528ms/epoch - 15ms/step Epoch 5/20 36/36 - 1s - loss: 0.1294 - accuracy: 0.9421 - val_loss: 0.1420 - val_accuracy: 0.9533 - 684ms/epoch - 19ms/step Epoch 6/20 36/36 - 1s - loss: 0.1206 - accuracy: 0.9444 - val_loss: 0.1333 - val_accuracy: 0.9533 - 721ms/epoch - 20ms/step Epoch 7/20 36/36 - 1s - loss: 0.1191 - accuracy: 0.9418 - val_loss: 0.1342 - val_accuracy: 0.9512 - 643ms/epoch - 18ms/step Epoch 8/20 36/36 - 1s - loss: 0.1127 - accuracy: 0.9464 - val_loss: 0.1182 - val_accuracy: 0.9451 - 627ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.1124 - accuracy: 0.9499 - val_loss: 0.1307 - val_accuracy: 0.9461 - 604ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1103 - accuracy: 0.9534 - val_loss: 0.1013 - val_accuracy: 0.9619 - 758ms/epoch - 21ms/step Epoch 11/20 36/36 - 1s - loss: 0.1020 - accuracy: 0.9543 - val_loss: 0.1320 - val_accuracy: 0.9451 - 683ms/epoch - 19ms/step Epoch 12/20 36/36 - 1s - loss: 0.1063 - accuracy: 0.9514 - val_loss: 0.1082 - val_accuracy: 0.9497 - 732ms/epoch - 20ms/step Epoch 13/20 36/36 - 1s - loss: 0.1003 - accuracy: 0.9535 - val_loss: 0.1342 - val_accuracy: 0.9299 - 544ms/epoch - 15ms/step Epoch 14/20 36/36 - 0s - loss: 0.1060 - accuracy: 0.9503 - val_loss: 0.1492 - val_accuracy: 0.9395 - 442ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.1134 - accuracy: 0.9510 - val_loss: 0.1384 - val_accuracy: 0.9461 - 442ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.1116 - accuracy: 0.9483 - val_loss: 0.1423 - val_accuracy: 0.9456 - 435ms/epoch - 12ms/step Epoch 17/20 36/36 - 0s - loss: 0.0987 - accuracy: 0.9494 - val_loss: 0.1436 - val_accuracy: 0.9472 - 371ms/epoch - 10ms/step Epoch 18/20 36/36 - 0s - loss: 0.1048 - accuracy: 0.9473 - val_loss: 0.1263 - val_accuracy: 0.9426 - 337ms/epoch - 9ms/step Epoch 19/20 36/36 - 0s - loss: 0.1006 - accuracy: 0.9521 - val_loss: 0.1214 - val_accuracy: 0.9385 - 394ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.1042 - accuracy: 0.9492 - val_loss: 0.1277 - val_accuracy: 0.9466 - 447ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.3 | FilterSize 7]: 0.9466463327407837 Epoch 1/20 36/36 - 1s - loss: 0.9490 - accuracy: 0.8128 - val_loss: 0.1704 - val_accuracy: 0.9416 - 1s/epoch - 39ms/step Epoch 2/20 36/36 - 0s - loss: 0.1861 - accuracy: 0.9228 - val_loss: 0.1461 - val_accuracy: 0.9395 - 462ms/epoch - 13ms/step Epoch 3/20 36/36 - 1s - loss: 0.1580 - accuracy: 0.9333 - val_loss: 0.1557 - val_accuracy: 0.9517 - 504ms/epoch - 14ms/step Epoch 4/20 36/36 - 0s - loss: 0.1594 - accuracy: 0.9344 - val_loss: 0.1328 - val_accuracy: 0.9568 - 490ms/epoch - 14ms/step Epoch 5/20 36/36 - 0s - loss: 0.1451 - accuracy: 0.9389 - val_loss: 0.1320 - val_accuracy: 0.9543 - 494ms/epoch - 14ms/step Epoch 6/20 36/36 - 0s - loss: 0.1445 - accuracy: 0.9386 - val_loss: 0.1356 - val_accuracy: 0.9578 - 482ms/epoch - 13ms/step Epoch 7/20 36/36 - 0s - loss: 0.1394 - accuracy: 0.9368 - val_loss: 0.1578 - val_accuracy: 0.9461 - 465ms/epoch - 13ms/step Epoch 8/20 36/36 - 0s - loss: 0.1426 - accuracy: 0.9383 - val_loss: 0.1143 - val_accuracy: 0.9548 - 488ms/epoch - 14ms/step Epoch 9/20 36/36 - 1s - loss: 0.1343 - accuracy: 0.9396 - val_loss: 0.1133 - val_accuracy: 0.9644 - 522ms/epoch - 14ms/step Epoch 10/20 36/36 - 0s - loss: 0.1426 - accuracy: 0.9399 - val_loss: 0.1256 - val_accuracy: 0.9624 - 484ms/epoch - 13ms/step Epoch 11/20 36/36 - 0s - loss: 0.1453 - accuracy: 0.9405 - val_loss: 0.1375 - val_accuracy: 0.9599 - 472ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1367 - accuracy: 0.9408 - val_loss: 0.1009 - val_accuracy: 0.9619 - 436ms/epoch - 12ms/step Epoch 13/20 36/36 - 1s - loss: 0.1310 - accuracy: 0.9457 - val_loss: 0.1161 - val_accuracy: 0.9553 - 630ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.1301 - accuracy: 0.9445 - val_loss: 0.1066 - val_accuracy: 0.9619 - 837ms/epoch - 23ms/step Epoch 15/20 36/36 - 1s - loss: 0.1442 - accuracy: 0.9406 - val_loss: 0.1307 - val_accuracy: 0.9573 - 734ms/epoch - 20ms/step Epoch 16/20 36/36 - 1s - loss: 0.1384 - accuracy: 0.9391 - val_loss: 0.1296 - val_accuracy: 0.9563 - 670ms/epoch - 19ms/step Epoch 17/20 36/36 - 1s - loss: 0.1346 - accuracy: 0.9401 - val_loss: 0.1041 - val_accuracy: 0.9654 - 644ms/epoch - 18ms/step Epoch 18/20 36/36 - 1s - loss: 0.1473 - accuracy: 0.9361 - val_loss: 0.1250 - val_accuracy: 0.9593 - 708ms/epoch - 20ms/step Epoch 19/20 36/36 - 1s - loss: 0.1373 - accuracy: 0.9410 - val_loss: 0.1173 - val_accuracy: 0.9690 - 756ms/epoch - 21ms/step Epoch 20/20 36/36 - 1s - loss: 0.1397 - accuracy: 0.9415 - val_loss: 0.1163 - val_accuracy: 0.9675 - 561ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 3]: 0.9674796462059021 Epoch 1/20 36/36 - 2s - loss: 1.5163 - accuracy: 0.7174 - val_loss: 0.2448 - val_accuracy: 0.8811 - 2s/epoch - 51ms/step Epoch 2/20 36/36 - 1s - loss: 0.2152 - accuracy: 0.9113 - val_loss: 0.1467 - val_accuracy: 0.9538 - 669ms/epoch - 19ms/step Epoch 3/20 36/36 - 1s - loss: 0.1637 - accuracy: 0.9294 - val_loss: 0.1344 - val_accuracy: 0.9472 - 600ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1488 - accuracy: 0.9393 - val_loss: 0.1424 - val_accuracy: 0.9507 - 585ms/epoch - 16ms/step Epoch 5/20 36/36 - 1s - loss: 0.1517 - accuracy: 0.9393 - val_loss: 0.1251 - val_accuracy: 0.9583 - 653ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1407 - accuracy: 0.9410 - val_loss: 0.1078 - val_accuracy: 0.9609 - 768ms/epoch - 21ms/step Epoch 7/20 36/36 - 1s - loss: 0.1433 - accuracy: 0.9442 - val_loss: 0.1334 - val_accuracy: 0.9573 - 848ms/epoch - 24ms/step Epoch 8/20 36/36 - 1s - loss: 0.1432 - accuracy: 0.9390 - val_loss: 0.1251 - val_accuracy: 0.9578 - 590ms/epoch - 16ms/step Epoch 9/20 36/36 - 0s - loss: 0.1418 - accuracy: 0.9387 - val_loss: 0.1745 - val_accuracy: 0.9502 - 488ms/epoch - 14ms/step Epoch 10/20 36/36 - 0s - loss: 0.1322 - accuracy: 0.9432 - val_loss: 0.1123 - val_accuracy: 0.9609 - 447ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1309 - accuracy: 0.9430 - val_loss: 0.1246 - val_accuracy: 0.9527 - 471ms/epoch - 13ms/step Epoch 12/20 36/36 - 0s - loss: 0.1309 - accuracy: 0.9432 - val_loss: 0.1129 - val_accuracy: 0.9624 - 398ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.1270 - accuracy: 0.9463 - val_loss: 0.1151 - val_accuracy: 0.9660 - 492ms/epoch - 14ms/step Epoch 14/20 36/36 - 1s - loss: 0.1194 - accuracy: 0.9471 - val_loss: 0.1194 - val_accuracy: 0.9649 - 517ms/epoch - 14ms/step Epoch 15/20 36/36 - 1s - loss: 0.1310 - accuracy: 0.9479 - val_loss: 0.1115 - val_accuracy: 0.9624 - 511ms/epoch - 14ms/step Epoch 16/20 36/36 - 0s - loss: 0.1264 - accuracy: 0.9477 - val_loss: 0.1135 - val_accuracy: 0.9649 - 493ms/epoch - 14ms/step Epoch 17/20 36/36 - 1s - loss: 0.1284 - accuracy: 0.9430 - val_loss: 0.1147 - val_accuracy: 0.9624 - 513ms/epoch - 14ms/step Epoch 18/20 36/36 - 0s - loss: 0.1203 - accuracy: 0.9456 - val_loss: 0.1309 - val_accuracy: 0.9538 - 498ms/epoch - 14ms/step Epoch 19/20 36/36 - 1s - loss: 0.1213 - accuracy: 0.9442 - val_loss: 0.1105 - val_accuracy: 0.9604 - 507ms/epoch - 14ms/step Epoch 20/20 36/36 - 1s - loss: 0.1202 - accuracy: 0.9476 - val_loss: 0.1047 - val_accuracy: 0.9675 - 512ms/epoch - 14ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 5]: 0.9674796462059021 Epoch 1/20 36/36 - 2s - loss: 0.8517 - accuracy: 0.7265 - val_loss: 0.1746 - val_accuracy: 0.9390 - 2s/epoch - 45ms/step Epoch 2/20 36/36 - 0s - loss: 0.2560 - accuracy: 0.8681 - val_loss: 0.1777 - val_accuracy: 0.9243 - 424ms/epoch - 12ms/step Epoch 3/20 36/36 - 0s - loss: 0.2163 - accuracy: 0.8835 - val_loss: 0.1702 - val_accuracy: 0.9441 - 449ms/epoch - 12ms/step Epoch 4/20 36/36 - 0s - loss: 0.1906 - accuracy: 0.9068 - val_loss: 0.1295 - val_accuracy: 0.9568 - 432ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1821 - accuracy: 0.9127 - val_loss: 0.1159 - val_accuracy: 0.9553 - 376ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.1842 - accuracy: 0.9096 - val_loss: 0.1215 - val_accuracy: 0.9543 - 433ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1562 - accuracy: 0.9226 - val_loss: 0.1099 - val_accuracy: 0.9568 - 399ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.1553 - accuracy: 0.9242 - val_loss: 0.1673 - val_accuracy: 0.9548 - 415ms/epoch - 12ms/step Epoch 9/20 36/36 - 0s - loss: 0.1708 - accuracy: 0.9231 - val_loss: 0.1173 - val_accuracy: 0.9593 - 391ms/epoch - 11ms/step Epoch 10/20 36/36 - 0s - loss: 0.1660 - accuracy: 0.9197 - val_loss: 0.1240 - val_accuracy: 0.9507 - 418ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1665 - accuracy: 0.9163 - val_loss: 0.1377 - val_accuracy: 0.9558 - 400ms/epoch - 11ms/step Epoch 12/20 36/36 - 0s - loss: 0.1749 - accuracy: 0.9067 - val_loss: 0.1528 - val_accuracy: 0.9558 - 409ms/epoch - 11ms/step Epoch 13/20 36/36 - 0s - loss: 0.1626 - accuracy: 0.9157 - val_loss: 0.1176 - val_accuracy: 0.9644 - 400ms/epoch - 11ms/step Epoch 14/20 36/36 - 0s - loss: 0.1541 - accuracy: 0.9193 - val_loss: 0.1317 - val_accuracy: 0.9177 - 412ms/epoch - 11ms/step Epoch 15/20 36/36 - 0s - loss: 0.1491 - accuracy: 0.9211 - val_loss: 0.1495 - val_accuracy: 0.9096 - 455ms/epoch - 13ms/step Epoch 16/20 36/36 - 1s - loss: 0.1452 - accuracy: 0.9240 - val_loss: 0.1808 - val_accuracy: 0.9278 - 640ms/epoch - 18ms/step Epoch 17/20 36/36 - 1s - loss: 0.1863 - accuracy: 0.9045 - val_loss: 0.1649 - val_accuracy: 0.9446 - 624ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.1926 - accuracy: 0.9010 - val_loss: 0.1437 - val_accuracy: 0.9588 - 619ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.2003 - accuracy: 0.8996 - val_loss: 0.2111 - val_accuracy: 0.9614 - 613ms/epoch - 17ms/step Epoch 20/20 36/36 - 1s - loss: 0.2279 - accuracy: 0.9038 - val_loss: 0.2369 - val_accuracy: 0.9477 - 625ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.5 | FilterSize 7]: 0.9476625919342041 Epoch 1/20 36/36 - 2s - loss: 0.6447 - accuracy: 0.7894 - val_loss: 0.2187 - val_accuracy: 0.9309 - 2s/epoch - 51ms/step Epoch 2/20 36/36 - 0s - loss: 0.2417 - accuracy: 0.9088 - val_loss: 0.1712 - val_accuracy: 0.9411 - 384ms/epoch - 11ms/step Epoch 3/20 36/36 - 0s - loss: 0.2193 - accuracy: 0.9087 - val_loss: 0.1443 - val_accuracy: 0.9538 - 390ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.2039 - accuracy: 0.9208 - val_loss: 0.1751 - val_accuracy: 0.9395 - 419ms/epoch - 12ms/step Epoch 5/20 36/36 - 0s - loss: 0.1864 - accuracy: 0.9224 - val_loss: 0.1556 - val_accuracy: 0.9543 - 377ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.2038 - accuracy: 0.9179 - val_loss: 0.1492 - val_accuracy: 0.9558 - 448ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1876 - accuracy: 0.9179 - val_loss: 0.1508 - val_accuracy: 0.9568 - 380ms/epoch - 11ms/step Epoch 8/20 36/36 - 0s - loss: 0.2010 - accuracy: 0.9169 - val_loss: 0.1648 - val_accuracy: 0.9472 - 381ms/epoch - 11ms/step Epoch 9/20 36/36 - 0s - loss: 0.1879 - accuracy: 0.9200 - val_loss: 0.1507 - val_accuracy: 0.9563 - 416ms/epoch - 12ms/step Epoch 10/20 36/36 - 0s - loss: 0.2069 - accuracy: 0.9182 - val_loss: 0.1730 - val_accuracy: 0.9527 - 412ms/epoch - 11ms/step Epoch 11/20 36/36 - 1s - loss: 0.2303 - accuracy: 0.9021 - val_loss: 0.2013 - val_accuracy: 0.9451 - 508ms/epoch - 14ms/step Epoch 12/20 36/36 - 0s - loss: 0.2441 - accuracy: 0.8993 - val_loss: 0.1758 - val_accuracy: 0.9593 - 461ms/epoch - 13ms/step Epoch 13/20 36/36 - 0s - loss: 0.2169 - accuracy: 0.9046 - val_loss: 0.1996 - val_accuracy: 0.9507 - 436ms/epoch - 12ms/step Epoch 14/20 36/36 - 0s - loss: 0.2226 - accuracy: 0.9043 - val_loss: 0.1523 - val_accuracy: 0.9553 - 453ms/epoch - 13ms/step Epoch 15/20 36/36 - 0s - loss: 0.2143 - accuracy: 0.9059 - val_loss: 0.1431 - val_accuracy: 0.9533 - 429ms/epoch - 12ms/step Epoch 16/20 36/36 - 0s - loss: 0.2061 - accuracy: 0.9074 - val_loss: 0.1512 - val_accuracy: 0.9533 - 468ms/epoch - 13ms/step Epoch 17/20 36/36 - 0s - loss: 0.1885 - accuracy: 0.9127 - val_loss: 0.1592 - val_accuracy: 0.9507 - 465ms/epoch - 13ms/step Epoch 18/20 36/36 - 1s - loss: 0.1860 - accuracy: 0.9117 - val_loss: 0.1889 - val_accuracy: 0.9446 - 513ms/epoch - 14ms/step Epoch 19/20 36/36 - 0s - loss: 0.1942 - accuracy: 0.9115 - val_loss: 0.1347 - val_accuracy: 0.9604 - 461ms/epoch - 13ms/step Epoch 20/20 36/36 - 0s - loss: 0.1936 - accuracy: 0.9122 - val_loss: 0.1524 - val_accuracy: 0.9583 - 448ms/epoch - 12ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 3]: 0.9583333134651184 Epoch 1/20 36/36 - 2s - loss: 1.2395 - accuracy: 0.7521 - val_loss: 0.2411 - val_accuracy: 0.9339 - 2s/epoch - 60ms/step Epoch 2/20 36/36 - 1s - loss: 0.2265 - accuracy: 0.9064 - val_loss: 0.1572 - val_accuracy: 0.9426 - 584ms/epoch - 16ms/step Epoch 3/20 36/36 - 1s - loss: 0.2050 - accuracy: 0.9155 - val_loss: 0.1756 - val_accuracy: 0.9426 - 586ms/epoch - 16ms/step Epoch 4/20 36/36 - 1s - loss: 0.1944 - accuracy: 0.9220 - val_loss: 0.1263 - val_accuracy: 0.9538 - 592ms/epoch - 16ms/step Epoch 5/20 36/36 - 1s - loss: 0.1956 - accuracy: 0.9177 - val_loss: 0.1747 - val_accuracy: 0.9533 - 506ms/epoch - 14ms/step Epoch 6/20 36/36 - 0s - loss: 0.1892 - accuracy: 0.9222 - val_loss: 0.1375 - val_accuracy: 0.9583 - 418ms/epoch - 12ms/step Epoch 7/20 36/36 - 0s - loss: 0.1819 - accuracy: 0.9232 - val_loss: 0.1273 - val_accuracy: 0.9578 - 434ms/epoch - 12ms/step Epoch 8/20 36/36 - 0s - loss: 0.1815 - accuracy: 0.9255 - val_loss: 0.1545 - val_accuracy: 0.9548 - 453ms/epoch - 13ms/step Epoch 9/20 36/36 - 0s - loss: 0.1853 - accuracy: 0.9226 - val_loss: 0.1497 - val_accuracy: 0.9558 - 374ms/epoch - 10ms/step Epoch 10/20 36/36 - 0s - loss: 0.1714 - accuracy: 0.9268 - val_loss: 0.1397 - val_accuracy: 0.9578 - 433ms/epoch - 12ms/step Epoch 11/20 36/36 - 0s - loss: 0.1669 - accuracy: 0.9295 - val_loss: 0.1939 - val_accuracy: 0.9492 - 439ms/epoch - 12ms/step Epoch 12/20 36/36 - 0s - loss: 0.1696 - accuracy: 0.9279 - val_loss: 0.1331 - val_accuracy: 0.9533 - 431ms/epoch - 12ms/step Epoch 13/20 36/36 - 0s - loss: 0.1794 - accuracy: 0.9216 - val_loss: 0.1590 - val_accuracy: 0.9492 - 434ms/epoch - 12ms/step Epoch 14/20 36/36 - 0s - loss: 0.1811 - accuracy: 0.9262 - val_loss: 0.1592 - val_accuracy: 0.9563 - 436ms/epoch - 12ms/step Epoch 15/20 36/36 - 0s - loss: 0.2277 - accuracy: 0.9161 - val_loss: 0.1575 - val_accuracy: 0.9568 - 410ms/epoch - 11ms/step Epoch 16/20 36/36 - 0s - loss: 0.2121 - accuracy: 0.9181 - val_loss: 0.1863 - val_accuracy: 0.9538 - 439ms/epoch - 12ms/step Epoch 17/20 36/36 - 0s - loss: 0.1939 - accuracy: 0.9141 - val_loss: 0.1577 - val_accuracy: 0.9533 - 381ms/epoch - 11ms/step Epoch 18/20 36/36 - 0s - loss: 0.1901 - accuracy: 0.9202 - val_loss: 0.1579 - val_accuracy: 0.9583 - 418ms/epoch - 12ms/step Epoch 19/20 36/36 - 0s - loss: 0.2056 - accuracy: 0.9128 - val_loss: 0.1467 - val_accuracy: 0.9543 - 380ms/epoch - 11ms/step Epoch 20/20 36/36 - 0s - loss: 0.2018 - accuracy: 0.9057 - val_loss: 0.1919 - val_accuracy: 0.9512 - 405ms/epoch - 11ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 5]: 0.9512194991111755 Epoch 1/20 36/36 - 1s - loss: 0.6803 - accuracy: 0.7899 - val_loss: 0.1954 - val_accuracy: 0.9522 - 1s/epoch - 37ms/step Epoch 2/20 36/36 - 0s - loss: 0.2362 - accuracy: 0.9074 - val_loss: 0.1833 - val_accuracy: 0.9436 - 364ms/epoch - 10ms/step Epoch 3/20 36/36 - 0s - loss: 0.2160 - accuracy: 0.9164 - val_loss: 0.1445 - val_accuracy: 0.9538 - 401ms/epoch - 11ms/step Epoch 4/20 36/36 - 0s - loss: 0.2185 - accuracy: 0.9045 - val_loss: 0.1511 - val_accuracy: 0.9527 - 360ms/epoch - 10ms/step Epoch 5/20 36/36 - 0s - loss: 0.2001 - accuracy: 0.9185 - val_loss: 0.1471 - val_accuracy: 0.9553 - 350ms/epoch - 10ms/step Epoch 6/20 36/36 - 0s - loss: 0.2103 - accuracy: 0.9142 - val_loss: 0.1698 - val_accuracy: 0.9558 - 405ms/epoch - 11ms/step Epoch 7/20 36/36 - 1s - loss: 0.2052 - accuracy: 0.9180 - val_loss: 0.1714 - val_accuracy: 0.9578 - 513ms/epoch - 14ms/step Epoch 8/20 36/36 - 1s - loss: 0.2102 - accuracy: 0.9061 - val_loss: 0.1670 - val_accuracy: 0.9593 - 574ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1995 - accuracy: 0.9208 - val_loss: 0.1652 - val_accuracy: 0.9599 - 598ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.2220 - accuracy: 0.9122 - val_loss: 0.1705 - val_accuracy: 0.9543 - 596ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1989 - accuracy: 0.9194 - val_loss: 0.1786 - val_accuracy: 0.9502 - 603ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.2039 - accuracy: 0.9129 - val_loss: 0.2219 - val_accuracy: 0.9522 - 610ms/epoch - 17ms/step Epoch 13/20 36/36 - 1s - loss: 0.2215 - accuracy: 0.9109 - val_loss: 0.1583 - val_accuracy: 0.9538 - 599ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.2281 - accuracy: 0.9085 - val_loss: 0.2339 - val_accuracy: 0.9502 - 694ms/epoch - 19ms/step Epoch 15/20 36/36 - 1s - loss: 0.3021 - accuracy: 0.8771 - val_loss: 0.2203 - val_accuracy: 0.9502 - 671ms/epoch - 19ms/step Epoch 16/20 36/36 - 1s - loss: 0.2812 - accuracy: 0.8599 - val_loss: 0.2157 - val_accuracy: 0.9472 - 646ms/epoch - 18ms/step Epoch 17/20 36/36 - 1s - loss: 0.2713 - accuracy: 0.8711 - val_loss: 0.2152 - val_accuracy: 0.9492 - 574ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.2452 - accuracy: 0.8780 - val_loss: 0.3420 - val_accuracy: 0.9507 - 546ms/epoch - 15ms/step Epoch 19/20 36/36 - 1s - loss: 0.4254 - accuracy: 0.8340 - val_loss: 0.3600 - val_accuracy: 0.8989 - 571ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.4816 - accuracy: 0.7188 - val_loss: 0.3817 - val_accuracy: 0.8928 - 575ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 64 | Dropout Rate 0.7 | FilterSize 7]: 0.8927845358848572 Epoch 1/20 36/36 - 3s - loss: 1.9836 - accuracy: 0.8075 - val_loss: 0.1632 - val_accuracy: 0.9416 - 3s/epoch - 93ms/step Epoch 2/20 36/36 - 1s - loss: 0.1543 - accuracy: 0.9378 - val_loss: 0.1391 - val_accuracy: 0.9390 - 613ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1440 - accuracy: 0.9408 - val_loss: 0.1428 - val_accuracy: 0.9456 - 604ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1290 - accuracy: 0.9470 - val_loss: 0.1069 - val_accuracy: 0.9578 - 601ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1399 - accuracy: 0.9451 - val_loss: 0.1185 - val_accuracy: 0.9599 - 628ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1195 - accuracy: 0.9503 - val_loss: 0.1472 - val_accuracy: 0.9375 - 610ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1180 - accuracy: 0.9525 - val_loss: 0.1077 - val_accuracy: 0.9538 - 729ms/epoch - 20ms/step Epoch 8/20 36/36 - 1s - loss: 0.1383 - accuracy: 0.9475 - val_loss: 0.1262 - val_accuracy: 0.9543 - 754ms/epoch - 21ms/step Epoch 9/20 36/36 - 1s - loss: 0.1244 - accuracy: 0.9511 - val_loss: 0.0955 - val_accuracy: 0.9665 - 716ms/epoch - 20ms/step Epoch 10/20 36/36 - 1s - loss: 0.1111 - accuracy: 0.9550 - val_loss: 0.1022 - val_accuracy: 0.9568 - 673ms/epoch - 19ms/step Epoch 11/20 36/36 - 1s - loss: 0.1115 - accuracy: 0.9550 - val_loss: 0.1076 - val_accuracy: 0.9604 - 662ms/epoch - 18ms/step Epoch 12/20 36/36 - 1s - loss: 0.1112 - accuracy: 0.9551 - val_loss: 0.1055 - val_accuracy: 0.9614 - 641ms/epoch - 18ms/step Epoch 13/20 36/36 - 1s - loss: 0.0987 - accuracy: 0.9605 - val_loss: 0.1061 - val_accuracy: 0.9634 - 673ms/epoch - 19ms/step Epoch 14/20 36/36 - 1s - loss: 0.1045 - accuracy: 0.9579 - val_loss: 0.0925 - val_accuracy: 0.9736 - 918ms/epoch - 26ms/step Epoch 15/20 36/36 - 1s - loss: 0.1053 - accuracy: 0.9555 - val_loss: 0.1021 - val_accuracy: 0.9619 - 1s/epoch - 28ms/step Epoch 16/20 36/36 - 1s - loss: 0.1001 - accuracy: 0.9596 - val_loss: 0.0992 - val_accuracy: 0.9695 - 1s/epoch - 28ms/step Epoch 17/20 36/36 - 1s - loss: 0.0994 - accuracy: 0.9616 - val_loss: 0.0937 - val_accuracy: 0.9695 - 1s/epoch - 28ms/step Epoch 18/20 36/36 - 1s - loss: 0.0911 - accuracy: 0.9644 - val_loss: 0.1217 - val_accuracy: 0.9634 - 995ms/epoch - 28ms/step Epoch 19/20 36/36 - 1s - loss: 0.0983 - accuracy: 0.9615 - val_loss: 0.1204 - val_accuracy: 0.9543 - 909ms/epoch - 25ms/step Epoch 20/20 36/36 - 1s - loss: 0.0952 - accuracy: 0.9630 - val_loss: 0.0989 - val_accuracy: 0.9695 - 617ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 3]: 0.9695122241973877 Epoch 1/20 36/36 - 2s - loss: 2.8663 - accuracy: 0.7484 - val_loss: 0.2822 - val_accuracy: 0.8435 - 2s/epoch - 45ms/step Epoch 2/20 36/36 - 1s - loss: 0.2530 - accuracy: 0.8934 - val_loss: 0.1432 - val_accuracy: 0.9466 - 649ms/epoch - 18ms/step Epoch 3/20 36/36 - 1s - loss: 0.1839 - accuracy: 0.9146 - val_loss: 0.1217 - val_accuracy: 0.9553 - 627ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1531 - accuracy: 0.9234 - val_loss: 0.1166 - val_accuracy: 0.9543 - 622ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1405 - accuracy: 0.9330 - val_loss: 0.1270 - val_accuracy: 0.9604 - 642ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1459 - accuracy: 0.9285 - val_loss: 0.1537 - val_accuracy: 0.9543 - 651ms/epoch - 18ms/step Epoch 7/20 36/36 - 1s - loss: 0.1576 - accuracy: 0.9239 - val_loss: 0.1845 - val_accuracy: 0.9527 - 645ms/epoch - 18ms/step Epoch 8/20 36/36 - 1s - loss: 0.1837 - accuracy: 0.9195 - val_loss: 0.1555 - val_accuracy: 0.9543 - 587ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1546 - accuracy: 0.9255 - val_loss: 0.1520 - val_accuracy: 0.9548 - 688ms/epoch - 19ms/step Epoch 10/20 36/36 - 1s - loss: 0.1415 - accuracy: 0.9253 - val_loss: 0.1142 - val_accuracy: 0.9477 - 720ms/epoch - 20ms/step Epoch 11/20 36/36 - 1s - loss: 0.1308 - accuracy: 0.9318 - val_loss: 0.1277 - val_accuracy: 0.9400 - 686ms/epoch - 19ms/step Epoch 12/20 36/36 - 1s - loss: 0.1235 - accuracy: 0.9352 - val_loss: 0.1309 - val_accuracy: 0.9451 - 586ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.1270 - accuracy: 0.9358 - val_loss: 0.1760 - val_accuracy: 0.9599 - 717ms/epoch - 20ms/step Epoch 14/20 36/36 - 1s - loss: 0.1274 - accuracy: 0.9364 - val_loss: 0.1419 - val_accuracy: 0.9573 - 916ms/epoch - 25ms/step Epoch 15/20 36/36 - 1s - loss: 0.1215 - accuracy: 0.9400 - val_loss: 0.1292 - val_accuracy: 0.9599 - 980ms/epoch - 27ms/step Epoch 16/20 36/36 - 1s - loss: 0.1222 - accuracy: 0.9385 - val_loss: 0.1349 - val_accuracy: 0.9619 - 998ms/epoch - 28ms/step Epoch 17/20 36/36 - 1s - loss: 0.1354 - accuracy: 0.9353 - val_loss: 0.1167 - val_accuracy: 0.9390 - 932ms/epoch - 26ms/step Epoch 18/20 36/36 - 1s - loss: 0.1183 - accuracy: 0.9394 - val_loss: 0.1698 - val_accuracy: 0.8765 - 936ms/epoch - 26ms/step Epoch 19/20 36/36 - 1s - loss: 0.1360 - accuracy: 0.9287 - val_loss: 0.1353 - val_accuracy: 0.9599 - 676ms/epoch - 19ms/step Epoch 20/20 36/36 - 1s - loss: 0.1287 - accuracy: 0.9292 - val_loss: 0.1282 - val_accuracy: 0.9355 - 702ms/epoch - 20ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 5]: 0.9354674816131592 Epoch 1/20 36/36 - 2s - loss: 2.9478 - accuracy: 0.7396 - val_loss: 0.2351 - val_accuracy: 0.9126 - 2s/epoch - 52ms/step Epoch 2/20 36/36 - 1s - loss: 0.2012 - accuracy: 0.9064 - val_loss: 0.1431 - val_accuracy: 0.9375 - 597ms/epoch - 17ms/step Epoch 3/20 36/36 - 1s - loss: 0.1544 - accuracy: 0.9274 - val_loss: 0.1362 - val_accuracy: 0.9365 - 611ms/epoch - 17ms/step Epoch 4/20 36/36 - 1s - loss: 0.1439 - accuracy: 0.9302 - val_loss: 0.1629 - val_accuracy: 0.9217 - 602ms/epoch - 17ms/step Epoch 5/20 36/36 - 1s - loss: 0.1505 - accuracy: 0.9171 - val_loss: 0.1341 - val_accuracy: 0.9538 - 628ms/epoch - 17ms/step Epoch 6/20 36/36 - 1s - loss: 0.1418 - accuracy: 0.9281 - val_loss: 0.1439 - val_accuracy: 0.9339 - 690ms/epoch - 19ms/step Epoch 7/20 36/36 - 1s - loss: 0.1243 - accuracy: 0.9360 - val_loss: 0.1359 - val_accuracy: 0.9400 - 673ms/epoch - 19ms/step Epoch 8/20 36/36 - 1s - loss: 0.1267 - accuracy: 0.9356 - val_loss: 0.1750 - val_accuracy: 0.8882 - 601ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.1309 - accuracy: 0.9383 - val_loss: 0.1747 - val_accuracy: 0.9268 - 616ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1158 - accuracy: 0.9412 - val_loss: 0.1269 - val_accuracy: 0.9350 - 624ms/epoch - 17ms/step Epoch 11/20 36/36 - 1s - loss: 0.1191 - accuracy: 0.9388 - val_loss: 0.1456 - val_accuracy: 0.9197 - 620ms/epoch - 17ms/step Epoch 12/20 36/36 - 1s - loss: 0.1172 - accuracy: 0.9439 - val_loss: 0.1613 - val_accuracy: 0.9319 - 642ms/epoch - 18ms/step Epoch 13/20 36/36 - 1s - loss: 0.1103 - accuracy: 0.9438 - val_loss: 0.1164 - val_accuracy: 0.9329 - 863ms/epoch - 24ms/step Epoch 14/20 36/36 - 1s - loss: 0.1098 - accuracy: 0.9451 - val_loss: 0.1539 - val_accuracy: 0.9375 - 840ms/epoch - 23ms/step Epoch 15/20 36/36 - 1s - loss: 0.1224 - accuracy: 0.9376 - val_loss: 0.1172 - val_accuracy: 0.9654 - 845ms/epoch - 23ms/step Epoch 16/20 36/36 - 1s - loss: 0.1172 - accuracy: 0.9362 - val_loss: 0.1431 - val_accuracy: 0.9167 - 847ms/epoch - 24ms/step Epoch 17/20 36/36 - 1s - loss: 0.1244 - accuracy: 0.9373 - val_loss: 0.1426 - val_accuracy: 0.9624 - 959ms/epoch - 27ms/step Epoch 18/20 36/36 - 1s - loss: 0.1156 - accuracy: 0.9387 - val_loss: 0.1307 - val_accuracy: 0.9360 - 758ms/epoch - 21ms/step Epoch 19/20 36/36 - 1s - loss: 0.1090 - accuracy: 0.9459 - val_loss: 0.1354 - val_accuracy: 0.9375 - 581ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.1164 - accuracy: 0.9447 - val_loss: 0.1860 - val_accuracy: 0.8953 - 586ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.3 | FilterSize 7]: 0.8953251838684082 Epoch 1/20 36/36 - 2s - loss: 3.4459 - accuracy: 0.7478 - val_loss: 0.1817 - val_accuracy: 0.9268 - 2s/epoch - 52ms/step Epoch 2/20 36/36 - 1s - loss: 0.2273 - accuracy: 0.9125 - val_loss: 0.1537 - val_accuracy: 0.9431 - 796ms/epoch - 22ms/step Epoch 3/20 36/36 - 1s - loss: 0.1681 - accuracy: 0.9289 - val_loss: 0.1601 - val_accuracy: 0.9436 - 696ms/epoch - 19ms/step Epoch 4/20 36/36 - 1s - loss: 0.1598 - accuracy: 0.9340 - val_loss: 0.1513 - val_accuracy: 0.9461 - 704ms/epoch - 20ms/step Epoch 5/20 36/36 - 1s - loss: 0.1542 - accuracy: 0.9339 - val_loss: 0.1510 - val_accuracy: 0.9456 - 684ms/epoch - 19ms/step Epoch 6/20 36/36 - 1s - loss: 0.1575 - accuracy: 0.9340 - val_loss: 0.1611 - val_accuracy: 0.9395 - 715ms/epoch - 20ms/step Epoch 7/20 36/36 - 1s - loss: 0.1534 - accuracy: 0.9331 - val_loss: 0.1303 - val_accuracy: 0.9461 - 733ms/epoch - 20ms/step Epoch 8/20 36/36 - 1s - loss: 0.1399 - accuracy: 0.9373 - val_loss: 0.1246 - val_accuracy: 0.9568 - 703ms/epoch - 20ms/step Epoch 9/20 36/36 - 1s - loss: 0.1425 - accuracy: 0.9329 - val_loss: 0.1228 - val_accuracy: 0.9482 - 685ms/epoch - 19ms/step Epoch 10/20 36/36 - 1s - loss: 0.1284 - accuracy: 0.9377 - val_loss: 0.1634 - val_accuracy: 0.9451 - 681ms/epoch - 19ms/step Epoch 11/20 36/36 - 1s - loss: 0.1318 - accuracy: 0.9370 - val_loss: 0.1091 - val_accuracy: 0.9517 - 966ms/epoch - 27ms/step Epoch 12/20 36/36 - 1s - loss: 0.1341 - accuracy: 0.9359 - val_loss: 0.1228 - val_accuracy: 0.9502 - 1s/epoch - 28ms/step Epoch 13/20 36/36 - 1s - loss: 0.1321 - accuracy: 0.9346 - val_loss: 0.1453 - val_accuracy: 0.9466 - 1s/epoch - 29ms/step Epoch 14/20 36/36 - 1s - loss: 0.1341 - accuracy: 0.9400 - val_loss: 0.1275 - val_accuracy: 0.9472 - 1s/epoch - 29ms/step Epoch 15/20 36/36 - 1s - loss: 0.1407 - accuracy: 0.9351 - val_loss: 0.1158 - val_accuracy: 0.9507 - 1s/epoch - 29ms/step Epoch 16/20 36/36 - 1s - loss: 0.1360 - accuracy: 0.9351 - val_loss: 0.1445 - val_accuracy: 0.9472 - 964ms/epoch - 27ms/step Epoch 17/20 36/36 - 1s - loss: 0.1438 - accuracy: 0.9332 - val_loss: 0.1229 - val_accuracy: 0.9578 - 679ms/epoch - 19ms/step Epoch 18/20 36/36 - 1s - loss: 0.1267 - accuracy: 0.9365 - val_loss: 0.1199 - val_accuracy: 0.9497 - 636ms/epoch - 18ms/step Epoch 19/20 36/36 - 1s - loss: 0.1306 - accuracy: 0.9375 - val_loss: 0.1131 - val_accuracy: 0.9512 - 775ms/epoch - 22ms/step Epoch 20/20 36/36 - 1s - loss: 0.1265 - accuracy: 0.9369 - val_loss: 0.1250 - val_accuracy: 0.9360 - 663ms/epoch - 18ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 3]: 0.9359756112098694 Epoch 1/20 36/36 - 2s - loss: 1.7760 - accuracy: 0.7579 - val_loss: 0.2515 - val_accuracy: 0.9207 - 2s/epoch - 47ms/step Epoch 2/20 36/36 - 1s - loss: 0.2362 - accuracy: 0.8926 - val_loss: 0.1872 - val_accuracy: 0.9304 - 665ms/epoch - 18ms/step Epoch 3/20 36/36 - 1s - loss: 0.2075 - accuracy: 0.9118 - val_loss: 0.1549 - val_accuracy: 0.9507 - 764ms/epoch - 21ms/step Epoch 4/20 36/36 - 1s - loss: 0.1833 - accuracy: 0.9224 - val_loss: 0.1477 - val_accuracy: 0.9538 - 737ms/epoch - 20ms/step Epoch 5/20 36/36 - 1s - loss: 0.1755 - accuracy: 0.9307 - val_loss: 0.1566 - val_accuracy: 0.9497 - 631ms/epoch - 18ms/step Epoch 6/20 36/36 - 1s - loss: 0.1800 - accuracy: 0.9313 - val_loss: 0.1831 - val_accuracy: 0.9466 - 604ms/epoch - 17ms/step Epoch 7/20 36/36 - 1s - loss: 0.1712 - accuracy: 0.9322 - val_loss: 0.1245 - val_accuracy: 0.9629 - 557ms/epoch - 15ms/step Epoch 8/20 36/36 - 1s - loss: 0.1579 - accuracy: 0.9362 - val_loss: 0.1369 - val_accuracy: 0.9548 - 563ms/epoch - 16ms/step Epoch 9/20 36/36 - 1s - loss: 0.1509 - accuracy: 0.9364 - val_loss: 0.1813 - val_accuracy: 0.9482 - 597ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.1418 - accuracy: 0.9428 - val_loss: 0.1211 - val_accuracy: 0.9533 - 932ms/epoch - 26ms/step Epoch 11/20 36/36 - 1s - loss: 0.1501 - accuracy: 0.9381 - val_loss: 0.1408 - val_accuracy: 0.9573 - 1s/epoch - 28ms/step Epoch 12/20 36/36 - 1s - loss: 0.1415 - accuracy: 0.9414 - val_loss: 0.1090 - val_accuracy: 0.9609 - 1s/epoch - 28ms/step Epoch 13/20 36/36 - 1s - loss: 0.1471 - accuracy: 0.9385 - val_loss: 0.1580 - val_accuracy: 0.9502 - 991ms/epoch - 28ms/step Epoch 14/20 36/36 - 1s - loss: 0.1431 - accuracy: 0.9392 - val_loss: 0.1121 - val_accuracy: 0.9604 - 974ms/epoch - 27ms/step Epoch 15/20 36/36 - 1s - loss: 0.1434 - accuracy: 0.9408 - val_loss: 0.1076 - val_accuracy: 0.9604 - 1s/epoch - 30ms/step Epoch 16/20 36/36 - 1s - loss: 0.1539 - accuracy: 0.9358 - val_loss: 0.1326 - val_accuracy: 0.9507 - 848ms/epoch - 24ms/step Epoch 17/20 36/36 - 1s - loss: 0.1430 - accuracy: 0.9387 - val_loss: 0.1036 - val_accuracy: 0.9634 - 666ms/epoch - 19ms/step Epoch 18/20 36/36 - 1s - loss: 0.1454 - accuracy: 0.9385 - val_loss: 0.1480 - val_accuracy: 0.9563 - 679ms/epoch - 19ms/step Epoch 19/20 36/36 - 1s - loss: 0.1666 - accuracy: 0.9279 - val_loss: 0.1729 - val_accuracy: 0.9553 - 673ms/epoch - 19ms/step Epoch 20/20 36/36 - 1s - loss: 0.1547 - accuracy: 0.9377 - val_loss: 0.1163 - val_accuracy: 0.9629 - 664ms/epoch - 18ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 5]: 0.9629064798355103 Epoch 1/20 36/36 - 2s - loss: 2.4421 - accuracy: 0.6828 - val_loss: 0.2388 - val_accuracy: 0.8908 - 2s/epoch - 50ms/step Epoch 2/20 36/36 - 1s - loss: 0.2723 - accuracy: 0.8690 - val_loss: 0.1889 - val_accuracy: 0.9400 - 885ms/epoch - 25ms/step Epoch 3/20 36/36 - 1s - loss: 0.2372 - accuracy: 0.8958 - val_loss: 0.1619 - val_accuracy: 0.9512 - 852ms/epoch - 24ms/step Epoch 4/20 36/36 - 1s - loss: 0.2035 - accuracy: 0.9067 - val_loss: 0.1567 - val_accuracy: 0.9563 - 828ms/epoch - 23ms/step Epoch 5/20 36/36 - 1s - loss: 0.1785 - accuracy: 0.9205 - val_loss: 0.1612 - val_accuracy: 0.9558 - 819ms/epoch - 23ms/step Epoch 6/20 36/36 - 1s - loss: 0.1757 - accuracy: 0.9225 - val_loss: 0.1222 - val_accuracy: 0.9563 - 833ms/epoch - 23ms/step Epoch 7/20 36/36 - 1s - loss: 0.1759 - accuracy: 0.9173 - val_loss: 0.1701 - val_accuracy: 0.9568 - 667ms/epoch - 19ms/step Epoch 8/20 36/36 - 1s - loss: 0.1768 - accuracy: 0.9229 - val_loss: 0.1447 - val_accuracy: 0.9629 - 555ms/epoch - 15ms/step Epoch 9/20 36/36 - 1s - loss: 0.1612 - accuracy: 0.9303 - val_loss: 0.1653 - val_accuracy: 0.9543 - 528ms/epoch - 15ms/step Epoch 10/20 36/36 - 1s - loss: 0.1587 - accuracy: 0.9305 - val_loss: 0.1344 - val_accuracy: 0.9624 - 555ms/epoch - 15ms/step Epoch 11/20 36/36 - 1s - loss: 0.1552 - accuracy: 0.9347 - val_loss: 0.1259 - val_accuracy: 0.9629 - 563ms/epoch - 16ms/step Epoch 12/20 36/36 - 1s - loss: 0.1566 - accuracy: 0.9332 - val_loss: 0.1457 - val_accuracy: 0.9624 - 558ms/epoch - 16ms/step Epoch 13/20 36/36 - 1s - loss: 0.1441 - accuracy: 0.9382 - val_loss: 0.1351 - val_accuracy: 0.9507 - 633ms/epoch - 18ms/step Epoch 14/20 36/36 - 1s - loss: 0.1526 - accuracy: 0.9357 - val_loss: 0.1083 - val_accuracy: 0.9599 - 634ms/epoch - 18ms/step Epoch 15/20 36/36 - 1s - loss: 0.1482 - accuracy: 0.9348 - val_loss: 0.1076 - val_accuracy: 0.9660 - 622ms/epoch - 17ms/step Epoch 16/20 36/36 - 1s - loss: 0.1489 - accuracy: 0.9326 - val_loss: 0.1306 - val_accuracy: 0.9604 - 594ms/epoch - 16ms/step Epoch 17/20 36/36 - 1s - loss: 0.1372 - accuracy: 0.9361 - val_loss: 0.1344 - val_accuracy: 0.9660 - 553ms/epoch - 15ms/step Epoch 18/20 36/36 - 1s - loss: 0.1393 - accuracy: 0.9371 - val_loss: 0.1142 - val_accuracy: 0.9665 - 568ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.1310 - accuracy: 0.9408 - val_loss: 0.1218 - val_accuracy: 0.9639 - 587ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.1366 - accuracy: 0.9363 - val_loss: 0.1326 - val_accuracy: 0.9568 - 585ms/epoch - 16ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.5 | FilterSize 7]: 0.9568089246749878 Epoch 1/20 36/36 - 2s - loss: 1.4128 - accuracy: 0.7556 - val_loss: 0.2215 - val_accuracy: 0.9273 - 2s/epoch - 50ms/step Epoch 2/20 36/36 - 1s - loss: 0.2518 - accuracy: 0.8960 - val_loss: 0.1685 - val_accuracy: 0.9360 - 699ms/epoch - 19ms/step Epoch 3/20 36/36 - 1s - loss: 0.2149 - accuracy: 0.9064 - val_loss: 0.1469 - val_accuracy: 0.9400 - 660ms/epoch - 18ms/step Epoch 4/20 36/36 - 1s - loss: 0.2006 - accuracy: 0.9061 - val_loss: 0.1491 - val_accuracy: 0.9456 - 677ms/epoch - 19ms/step Epoch 5/20 36/36 - 1s - loss: 0.1865 - accuracy: 0.9126 - val_loss: 0.1277 - val_accuracy: 0.9375 - 675ms/epoch - 19ms/step Epoch 6/20 36/36 - 1s - loss: 0.1690 - accuracy: 0.9212 - val_loss: 0.1371 - val_accuracy: 0.9395 - 663ms/epoch - 18ms/step Epoch 7/20 36/36 - 1s - loss: 0.1679 - accuracy: 0.9198 - val_loss: 0.1639 - val_accuracy: 0.9151 - 711ms/epoch - 20ms/step Epoch 8/20 36/36 - 1s - loss: 0.1847 - accuracy: 0.9148 - val_loss: 0.1211 - val_accuracy: 0.9370 - 696ms/epoch - 19ms/step Epoch 9/20 36/36 - 1s - loss: 0.1717 - accuracy: 0.9167 - val_loss: 0.1321 - val_accuracy: 0.9390 - 682ms/epoch - 19ms/step Epoch 10/20 36/36 - 1s - loss: 0.1659 - accuracy: 0.9204 - val_loss: 0.1597 - val_accuracy: 0.9212 - 950ms/epoch - 26ms/step Epoch 11/20 36/36 - 1s - loss: 0.1639 - accuracy: 0.9173 - val_loss: 0.1676 - val_accuracy: 0.9411 - 1s/epoch - 30ms/step Epoch 12/20 36/36 - 1s - loss: 0.1692 - accuracy: 0.9205 - val_loss: 0.1385 - val_accuracy: 0.9553 - 1s/epoch - 32ms/step Epoch 13/20 36/36 - 1s - loss: 0.1684 - accuracy: 0.9208 - val_loss: 0.1465 - val_accuracy: 0.9548 - 1s/epoch - 32ms/step Epoch 14/20 36/36 - 1s - loss: 0.1647 - accuracy: 0.9198 - val_loss: 0.1368 - val_accuracy: 0.9563 - 940ms/epoch - 26ms/step Epoch 15/20 36/36 - 1s - loss: 0.1719 - accuracy: 0.9227 - val_loss: 0.1414 - val_accuracy: 0.9482 - 673ms/epoch - 19ms/step Epoch 16/20 36/36 - 1s - loss: 0.1668 - accuracy: 0.9187 - val_loss: 0.1444 - val_accuracy: 0.9502 - 616ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.1763 - accuracy: 0.9176 - val_loss: 0.1578 - val_accuracy: 0.9466 - 603ms/epoch - 17ms/step Epoch 18/20 36/36 - 1s - loss: 0.1663 - accuracy: 0.9205 - val_loss: 0.1289 - val_accuracy: 0.9563 - 621ms/epoch - 17ms/step Epoch 19/20 36/36 - 1s - loss: 0.1603 - accuracy: 0.9187 - val_loss: 0.1427 - val_accuracy: 0.9578 - 572ms/epoch - 16ms/step Epoch 20/20 36/36 - 1s - loss: 0.1639 - accuracy: 0.9156 - val_loss: 0.1684 - val_accuracy: 0.9441 - 611ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 3]: 0.9441056847572327 Epoch 1/20 36/36 - 2s - loss: 1.4300 - accuracy: 0.7351 - val_loss: 0.2067 - val_accuracy: 0.9395 - 2s/epoch - 64ms/step Epoch 2/20 36/36 - 1s - loss: 0.2693 - accuracy: 0.8771 - val_loss: 0.2020 - val_accuracy: 0.9512 - 981ms/epoch - 27ms/step Epoch 3/20 36/36 - 1s - loss: 0.2556 - accuracy: 0.8800 - val_loss: 0.1719 - val_accuracy: 0.9563 - 1s/epoch - 29ms/step Epoch 4/20 36/36 - 1s - loss: 0.2617 - accuracy: 0.8799 - val_loss: 0.2019 - val_accuracy: 0.9558 - 994ms/epoch - 28ms/step Epoch 5/20 36/36 - 1s - loss: 0.2400 - accuracy: 0.8891 - val_loss: 0.1631 - val_accuracy: 0.9614 - 923ms/epoch - 26ms/step Epoch 6/20 36/36 - 1s - loss: 0.2676 - accuracy: 0.8739 - val_loss: 0.1797 - val_accuracy: 0.9553 - 712ms/epoch - 20ms/step Epoch 7/20 36/36 - 1s - loss: 0.2636 - accuracy: 0.8758 - val_loss: 0.1866 - val_accuracy: 0.9472 - 626ms/epoch - 17ms/step Epoch 8/20 36/36 - 1s - loss: 0.2589 - accuracy: 0.8754 - val_loss: 0.1677 - val_accuracy: 0.9553 - 621ms/epoch - 17ms/step Epoch 9/20 36/36 - 1s - loss: 0.2587 - accuracy: 0.8786 - val_loss: 0.1830 - val_accuracy: 0.9477 - 622ms/epoch - 17ms/step Epoch 10/20 36/36 - 1s - loss: 0.2415 - accuracy: 0.8851 - val_loss: 0.1807 - val_accuracy: 0.9553 - 654ms/epoch - 18ms/step Epoch 11/20 36/36 - 1s - loss: 0.2476 - accuracy: 0.8801 - val_loss: 0.1692 - val_accuracy: 0.9568 - 653ms/epoch - 18ms/step Epoch 12/20 36/36 - 1s - loss: 0.2944 - accuracy: 0.8615 - val_loss: 0.2192 - val_accuracy: 0.9497 - 645ms/epoch - 18ms/step Epoch 13/20 36/36 - 1s - loss: 0.2766 - accuracy: 0.8481 - val_loss: 0.1846 - val_accuracy: 0.9563 - 626ms/epoch - 17ms/step Epoch 14/20 36/36 - 1s - loss: 0.2544 - accuracy: 0.8674 - val_loss: 0.1575 - val_accuracy: 0.9578 - 664ms/epoch - 18ms/step Epoch 15/20 36/36 - 1s - loss: 0.2423 - accuracy: 0.8796 - val_loss: 0.1345 - val_accuracy: 0.9578 - 660ms/epoch - 18ms/step Epoch 16/20 36/36 - 1s - loss: 0.2308 - accuracy: 0.8879 - val_loss: 0.1502 - val_accuracy: 0.9487 - 600ms/epoch - 17ms/step Epoch 17/20 36/36 - 1s - loss: 0.2285 - accuracy: 0.8868 - val_loss: 0.1728 - val_accuracy: 0.9578 - 592ms/epoch - 16ms/step Epoch 18/20 36/36 - 1s - loss: 0.2775 - accuracy: 0.8756 - val_loss: 0.1718 - val_accuracy: 0.9558 - 567ms/epoch - 16ms/step Epoch 19/20 36/36 - 1s - loss: 0.2707 - accuracy: 0.8669 - val_loss: 0.2028 - val_accuracy: 0.9507 - 751ms/epoch - 21ms/step Epoch 20/20 36/36 - 1s - loss: 0.3048 - accuracy: 0.8285 - val_loss: 0.2362 - val_accuracy: 0.9522 - 719ms/epoch - 20ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 5]: 0.952235758304596 Epoch 1/20 36/36 - 2s - loss: 2.2991 - accuracy: 0.6856 - val_loss: 0.2051 - val_accuracy: 0.9284 - 2s/epoch - 46ms/step Epoch 2/20 36/36 - 1s - loss: 0.3057 - accuracy: 0.8430 - val_loss: 0.1972 - val_accuracy: 0.8918 - 584ms/epoch - 16ms/step Epoch 3/20 36/36 - 1s - loss: 0.2360 - accuracy: 0.8842 - val_loss: 0.1901 - val_accuracy: 0.9197 - 539ms/epoch - 15ms/step Epoch 4/20 36/36 - 0s - loss: 0.2210 - accuracy: 0.8900 - val_loss: 0.1281 - val_accuracy: 0.9405 - 495ms/epoch - 14ms/step Epoch 5/20 36/36 - 0s - loss: 0.1977 - accuracy: 0.9032 - val_loss: 0.1301 - val_accuracy: 0.9446 - 489ms/epoch - 14ms/step Epoch 6/20 36/36 - 1s - loss: 0.1853 - accuracy: 0.9188 - val_loss: 0.1309 - val_accuracy: 0.9390 - 531ms/epoch - 15ms/step Epoch 7/20 36/36 - 0s - loss: 0.1779 - accuracy: 0.9125 - val_loss: 0.1568 - val_accuracy: 0.9319 - 490ms/epoch - 14ms/step Epoch 8/20 36/36 - 1s - loss: 0.1844 - accuracy: 0.9167 - val_loss: 0.1254 - val_accuracy: 0.9456 - 503ms/epoch - 14ms/step Epoch 9/20 36/36 - 1s - loss: 0.1830 - accuracy: 0.9193 - val_loss: 0.1544 - val_accuracy: 0.9309 - 526ms/epoch - 15ms/step Epoch 10/20 36/36 - 1s - loss: 0.1835 - accuracy: 0.9202 - val_loss: 0.1677 - val_accuracy: 0.9238 - 517ms/epoch - 14ms/step Epoch 11/20 36/36 - 1s - loss: 0.1793 - accuracy: 0.9009 - val_loss: 0.1373 - val_accuracy: 0.9324 - 535ms/epoch - 15ms/step Epoch 12/20 36/36 - 1s - loss: 0.1679 - accuracy: 0.9090 - val_loss: 0.1702 - val_accuracy: 0.9365 - 509ms/epoch - 14ms/step Epoch 13/20 36/36 - 1s - loss: 0.1734 - accuracy: 0.9168 - val_loss: 0.1304 - val_accuracy: 0.9355 - 510ms/epoch - 14ms/step Epoch 14/20 36/36 - 1s - loss: 0.1660 - accuracy: 0.9215 - val_loss: 0.1574 - val_accuracy: 0.9365 - 705ms/epoch - 20ms/step Epoch 15/20 36/36 - 1s - loss: 0.1906 - accuracy: 0.9194 - val_loss: 0.1706 - val_accuracy: 0.9253 - 811ms/epoch - 23ms/step Epoch 16/20 36/36 - 1s - loss: 0.1937 - accuracy: 0.9035 - val_loss: 0.1697 - val_accuracy: 0.9273 - 803ms/epoch - 22ms/step Epoch 17/20 36/36 - 1s - loss: 0.1791 - accuracy: 0.9114 - val_loss: 0.1353 - val_accuracy: 0.9345 - 808ms/epoch - 22ms/step Epoch 18/20 36/36 - 1s - loss: 0.1833 - accuracy: 0.9188 - val_loss: 0.1636 - val_accuracy: 0.9172 - 808ms/epoch - 22ms/step Epoch 19/20 36/36 - 1s - loss: 0.1759 - accuracy: 0.9127 - val_loss: 0.1476 - val_accuracy: 0.9375 - 800ms/epoch - 22ms/step Epoch 20/20 36/36 - 1s - loss: 0.1963 - accuracy: 0.9093 - val_loss: 0.1470 - val_accuracy: 0.9334 - 597ms/epoch - 17ms/step Validation accuracy for Model of [Learning Rate 0.1 | Num Filters 128 | Dropout Rate 0.7 | FilterSize 7]: 0.9334349632263184
print("Best Model Validation Accuracy: " + str(best_val_accuracy))
print("Best Model Parameters: ")
print(best_model_parameters)
Best Model Validation Accuracy: 0.9766260385513306
Best Model Parameters:
{'learning_rate': 0.01, 'num_filters': 128, 'droupout_rate': 0.3, 'filter_size': 7}
10. Results and Discussion¶
Model Results¶
# Train the best model on the full training data
best_model.fit(X_train_res, y_train_res, epochs=num_epochs, batch_size=batch_size, verbose=1)
# Evaluate the best model on the test set
y_pred = best_model.predict(X_test)
y_pred_binary = (y_pred > 0.5).astype(int) # Convert probabilities to binary predictions (0 or 1)
accuracy = accuracy_score(y_test, y_pred_binary)
precision = precision_score(y_test, y_pred_binary)
recall = recall_score(y_test, y_pred_binary)
f1 = f1_score(y_test, y_pred_binary)
roc_auc = roc_auc_score(y_test, y_pred_binary)
conf_matrix = confusion_matrix(y_test, y_pred_binary)
print(f"Accuracy: {accuracy}")
print(f"Precision: {precision}")
print(f"Recall: {recall}")
print(f"F1 Score: {f1}")
print(f"ROC AUC Score: {roc_auc}")
print(f"Confusion Matrix:\n{conf_matrix}")
Epoch 1/20 36/36 [==============================] - 1s 14ms/step - loss: 0.0505 - accuracy: 0.9811 Epoch 2/20 36/36 [==============================] - 1s 15ms/step - loss: 0.0561 - accuracy: 0.9788 Epoch 3/20 36/36 [==============================] - 0s 14ms/step - loss: 0.0485 - accuracy: 0.9814 Epoch 4/20 36/36 [==============================] - 1s 15ms/step - loss: 0.0460 - accuracy: 0.9818 Epoch 5/20 36/36 [==============================] - 1s 14ms/step - loss: 0.0433 - accuracy: 0.9823 Epoch 6/20 36/36 [==============================] - 1s 15ms/step - loss: 0.0458 - accuracy: 0.9826 Epoch 7/20 36/36 [==============================] - 1s 14ms/step - loss: 0.0443 - accuracy: 0.9822 Epoch 8/20 36/36 [==============================] - 1s 17ms/step - loss: 0.0434 - accuracy: 0.9822 Epoch 9/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0395 - accuracy: 0.9848 Epoch 10/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0419 - accuracy: 0.9832 Epoch 11/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0419 - accuracy: 0.9828 Epoch 12/20 36/36 [==============================] - 1s 15ms/step - loss: 0.0377 - accuracy: 0.9846 Epoch 13/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0382 - accuracy: 0.9855 Epoch 14/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0422 - accuracy: 0.9829 Epoch 15/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0450 - accuracy: 0.9825 Epoch 16/20 36/36 [==============================] - 1s 16ms/step - loss: 0.0421 - accuracy: 0.9829 Epoch 17/20 36/36 [==============================] - 1s 17ms/step - loss: 0.0407 - accuracy: 0.9848 Epoch 18/20 36/36 [==============================] - 1s 26ms/step - loss: 0.0445 - accuracy: 0.9812 Epoch 19/20 36/36 [==============================] - 1s 25ms/step - loss: 0.0432 - accuracy: 0.9827 Epoch 20/20 36/36 [==============================] - 1s 24ms/step - loss: 0.0391 - accuracy: 0.9833 31/31 [==============================] - 0s 5ms/step Accuracy: 0.9736040609137055 Precision: 0.9571428571428572 Recall: 0.9220183486238532 F1 Score: 0.9392523364485982 ROC AUC Score: 0.9551421599703361 Confusion Matrix: [[758 9] [ 17 201]]
# Visualising the accuracy and loss graph for the model
history = best_model.fit(X_train_res, y_train_res, epochs=20, batch_size=300, validation_data=(X_val, y_val), verbose=0)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
Model Interpretability: Explainable AI (XAI)¶
# Run SHAP explainer on a subset of training data because running on entire training set will take very long
explainer = shap.DeepExplainer(best_model,shap.sample(X_train_res, 100))
# Check the shape of the current training set
X_train_res.shape
(10726, 21, 1)
# Check the shape of the current test set
X_test.shape
(985, 21, 1)
# Get shap values for each point in dataset
shap_values = explainer.shap_values(X_test)
# Check that the shape of the shap_values list is the same as that of the test set
shap_values[0].shape
(985, 21, 1)
# Both training and test set are three-dim because that is how the model receives its input data as a 1D sequence
# However, to visualize the impact of variables on a plot, we require the data to be in 2D
# We know there are 985 obs and 21 features in th test set, meaning we should flatten the third dimension
X_test_flatten = X_test.reshape(985, 21)
# Check the shape of test set
X_test_flatten.shape
(985, 21)
# The same explanation is applied to the shap values
shap_values_flatten = shap_values[0].reshape(985, 21)
# Check the shape of shap values
shap_values_flatten.shape
(985, 21)
In the force plot below, we show an example of how the additive Shapley values pushes the "negative" and "positive" forces to end up in a resultant prediciton of "0"
shap.initjs()
ind = 16
print(y_test.iloc[ind])
shap.force_plot(
explainer.expected_value.numpy(), shap_values_flatten[ind,:], X_test_flatten[ind,:],
feature_names=X_train.columns
)
0
Have you run `initjs()` in this notebook? If this notebook was from another user you must also trust this notebook (File -> Trust notebook). If you are viewing this notebook on github the Javascript has been stripped for security. If you are using JupyterLab this error is because a JupyterLab extension has not yet been written.
In the force plot below, we show an example of how the additive Shapley values pushes the "negative" and "positive" forces to end up in a resultant prediciton of "1"
shap.initjs()
ind = 933
print(y_test.iloc[ind])
shap.force_plot(
explainer.expected_value.numpy(), shap_values_flatten[ind,:], X_test_flatten[ind,:],
feature_names=X_train.columns
)
1
Have you run `initjs()` in this notebook? If this notebook was from another user you must also trust this notebook (File -> Trust notebook). If you are viewing this notebook on github the Javascript has been stripped for security. If you are using JupyterLab this error is because a JupyterLab extension has not yet been written.
shap.initjs()
shap.summary_plot(shap_values_flatten, X_test_flatten, feature_names=X_train.columns)
The points on the summary plot represent the Shapley value of the feature for each data point and the jittered points give us a sense of distribution of the Shapley values per feature. The features are ranked from highest to lowest interms of their contribution to a particular prediction direction. We see that the top three features are “ERC20_most_sent_token_valid_name”, “Time Diff between first and last (Mins) and “ERC20 min val rec”.